diff --git a/data_all_eng_slimpj/shuffled/split2/finalzhfq b/data_all_eng_slimpj/shuffled/split2/finalzhfq new file mode 100644 index 0000000000000000000000000000000000000000..3a6b040e983c4bdc60d15045be72831064c56fb8 --- /dev/null +++ b/data_all_eng_slimpj/shuffled/split2/finalzhfq @@ -0,0 +1,5 @@ +{"text":"\\section{Introduction}\n\nHigher twist corrections to hadronic interactions become increasingly\nimportant at high energies. An effective way to resum all these\ncontributions is the color glass condensate picture. We review the\nKharzeev-Levin-Nardi (KLN)\\cite{KLN01} approach to heavy ion\ncollisions, which has successfully been applied to RHIC physics, and\nextrapolate this model to Iron-air collisions at energies up to the\nGZK ($\\approx 10^{19.7}$~eV) cutoff. So far, we only calculate\nmultiplicities. In the near future, we hope to present a full model\nwhich also treats the forward scattering in detail, which is important\nfor air showers.\n\n\\section{Review of BBL 1.0}\n\nIn Ref. \\cite{Drescher:2005ig} we introduced the black body limit\n(BBL) model for hadron-nucleus reactions. Valence quarks scatter\ncoherently off the gluon field in the target. The transverse momentum\nacquired in this reaction is of the order of the saturation momentum,\ndefined by the density of gluons in the target. This leads to\nindependent fragmentation of the leading quarks at high energies, when\nthe saturation momentum is high. The leading baryon effect known from\nlower energies is suppressed.\n\nGluon production in this approach is realized within the KLN model: a\nsimple ansatz for the unintegrated gluon distribution function (uGDF)\nis applied to the $k_t$-factorization formula. In this paper, we apply\nthis model to Iron-air collisions. \n\n\\section{KLN approach to nucleus-nucleus collisions}\n\n\\begin{figure}[tb]\n\\includegraphics[width=\\columnwidth]{phi.eps}\n\\caption{The construction of the uGDF for low densities. 
The full line\nshows the uGDF of a single nucleon. The dashed line is the average\nuGDF for $p_A\\approx0.5$. The dotted line shows the uGDF for the\naveraged low density.}\n\\label{fig:phi}\n\\end{figure}\n\n\\begin{figure}[htb]\n\\includegraphics[width=\\columnwidth]{npart_C.eps}\n\\caption{The multiplicity of charged particles as a function of the centrality. The grey dotted line shows the result when taking $Q_s^2$ to be proportional to the average density $T_A$. The data is from PHOBOS \\cite{phobos}.}\n\\label{fig:npart}\n\\end{figure}\n\n\nIn the $k_\\perp$-factorization approach~\\cite{GLR83},\nthe distribution of produced gluons is given by\n\\begin{eqnarray}\n \\frac{dN_g}{d^2 r_{\\perp}dy}&=&\n \\frac{4N_c}{N_c^2-1} \\int^{p_\\perp^\\mathrm{max}}\\frac{d^2p_\\perp}{p^2_\\perp}\n \\int^{p_\\perp} {d^2 k_\\perp} \\;\\alpha_s \\nonumber \\\\\n &\\times & \\phi_A(x_1, {k}_\\perp^2) \\phi_B(x_2, ({p}_\\perp{-}{k}_\\perp)^2)\n \\label{eq:ktfac}\n\\end{eqnarray}\nwith $N_c=3$ the number of colors. Here, $p_\\perp$ and $y$ denote the\ntransverse momentum and the rapidity of the produced gluons,\nrespectively. The light-cone momentum fractions of the colliding gluon\nladders are then given by $x_{1,2} = p_\\perp\\exp(\\pm y)\/\\sqrt{s}$,\nwhere $\\sqrt{s}$ denotes the center of mass energy and $y$ is the\nrapidity of the produced gluon. We set\n$p_\\perp^\\mathrm{max}$ such that the minimal saturation scale\n$Q_s^\\mathrm{min}(x_{1,2})$ in the above integration is\n$\\Lambda_{QCD}=0.2$~GeV.\n\nThe KLN approach~\\cite{KLN01} employs the following uGDF:\n\\begin{equation}\n\\label{eq:uninteg}\n \\phi(x,k_\\perp^2;{r}_\\perp)\\sim\n \\frac{1}{\\alpha_s(Q^2_s)}\\frac{Q_s^2}\n {{\\rm max}(Q_s^2,k_\\perp^2)}~,\n\\end{equation}\nwhere $Q_s$ denotes the saturation momentum at the given momentum\nfraction $x$ and transverse position ${r}_\\perp$. The overall\nnormalization is determined by the multiplicity at mid-rapidity for the\nmost central collisions. 
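As a purely illustrative aside (not part of the original text), the convolution in Eq.~(1) with the KLN ansatz of Eq.~(2) can be sketched numerically. All normalizations, the factor $\alpha_s$, and the parameter values below are assumptions of this sketch, not the paper's:

```python
import math

def phi(k2, qs2):
    # KLN unintegrated gluon distribution, eq. (2), up to normalization
    # (the 1/alpha_s(Qs^2) prefactor and overall constant are dropped).
    return qs2 / max(qs2, k2)

def dndy(qs2_a, qs2_b, pmax, n=30):
    # Crude polar-grid estimate of the k_T-factorization integral, eq. (1),
    # at fixed x_1, x_2; constants and alpha_s are omitted.
    total = 0.0
    dp = pmax / n
    for i in range(n):
        p = (i + 0.5) * dp                    # |p_perp|
        dk, acc = p / n, 0.0
        for j in range(n):
            k = (j + 0.5) * dk                # |k_perp|
            for m in range(n):                # azimuthal angle of k_perp
                ang = (m + 0.5) * 2.0 * math.pi / n
                q2 = p * p + k * k - 2.0 * p * k * math.cos(ang)  # (p - k)^2
                acc += k * dk * (2.0 * math.pi / n) * phi(k * k, qs2_a) * phi(q2, qs2_b)
        total += 2.0 * math.pi * p * dp * acc / (p * p)
    return total

print(dndy(2.0, 1.0, 3.0))  # grows when either saturation scale is raised
```

Raising either $Q_s^2$ raises $\phi$ pointwise and hence the estimate, which is the qualitative behaviour used in the text.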
The saturation scale for nucleus $A$ is taken to be proportional to the density of participants, $n^A_\\mathrm{part}({r}_\\perp)$. This is not a universal quantity depending only on the properties of a single nucleus; in other words, the uGDF is not fully factorizable.\n\n\\begin{figure}[t]\n\\includegraphics[width=\\columnwidth]{Qss_C.eps}\n\\caption{The saturation scale as a function of the rapidity\n$y=\\log(1\/x)$. The vertical lines denote the forward and mid-rapidities\nfor the different energy regions.}\n\\label{fig:Qss}\n\\end{figure}\n\nA possible solution seems to be to define the saturation momentum squared to be proportional to $T_A(r_\\perp)$, which depends on one nucleus only and therefore respects factorization. However, this ansatz cannot describe the data on the multiplicity as a function of centrality for Au-Au collisions at RHIC, as shown in Fig.~\\ref{fig:npart}. Why this does not work can best be seen with the following example. We consider a peripheral Au-Au collision and want to construct the uGDF at the edge of one of the nuclei. Here, $T_A$ is very small, which means that only in some configurations of the nucleus do we actually find a nucleon at this position. Let us denote the probability to find (at least) one nucleon by $p_A$. The uGDF at this position is then $p_A$ times the uGDF of a single nucleon, as sketched in Fig.~\\ref{fig:phi} (dashed line). Taking the average density $T_A$ would lead to the dotted line and gives the wrong uGDF, since the averaging has to be done after constructing the wave function and not before. The density under the condition that at least one nucleon is present is $T_A\/p_A$; see \\cite{Drescher:2006ca} for details. 
The saturation scale is therefore:\n\n\\begin{equation}\n Q^2_{s,A}(x,{r}_\\perp) = \n 2\\,{\\rm GeV}^2\\left(\\frac{T_A({r}_\\perp)}{1.53 p_A({r}_\\perp)}\\right)\n \\left(\\frac{0.01}{x}\\right)^\\lambda\n \\label{eq:qs}\n\\end{equation}\nand the ansatz for the uGDF is\n\\begin{equation}\n\\phi_A = p_A ~ \\phi \\left( \\frac{T_A}{p_A} \\right)~. \\label{eq:phi_factorized}\n\\end{equation}\nIn Ref.~\\cite{Drescher:2006ca} we explain that the multiplicity is a homogeneous function of order one in the density of both nuclei, and that the $n_{part}$ description is a good approximation of the factorized KLN approach.\n\n\\section{Results}\n\\begin{figure}[tb]\n\\includegraphics[width=\\columnwidth]{dndeta_LHC_C.eps}\n\\caption{The rapidity dependence of the multiplicity for central\n($b=2.4~$fm) Pb-Pb collisions at the LHC.}\n\\label{fig:dndeta_LHC}\n\\end{figure}\n\n\\begin{figure}[tb]\n\\includegraphics[width=\\columnwidth]{npart_LHC_C.eps}\n\\caption{The multiplicity of charged particles as a function of the\ncentrality for a 5500 GeV Pb-Pb collision at the LHC (b=2.4~fm). The\nupper curve for each $\\lambda$ shows fixed coupling, the lower curve\nshows running coupling evolution.}\n\\label{fig:npart_LHC}\n\\end{figure}\n\nBefore showing some results and extrapolating to high energies, we want to discuss the evolution of the saturation scale as a function of the energy. Fig.~\\ref{fig:Qss} shows $Q_s$ as a function of the rapidity for fixed coupling and running coupling evolution; see Ref.~\\cite{Drescher:2005ig} for details on these two types. The two evolution types show large differences at forward rapidity for LHC and GZK energies. Differences at mid-rapidity for RHIC and LHC are, however, less significant. 
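To make Eq.~(\ref{eq:qs}) concrete, a short numerical sketch may help; the values of $T_A$, $p_A$ and $\lambda$ used here are assumptions for illustration, not the fitted values of the paper. It shows the growth of $Q_s^2$ towards small $x$ and the role of the conditional density $T_A/p_A$:

```python
def qs2(ta, pa, x, lam=0.28):
    # Saturation scale squared in GeV^2 following eq. (3);
    # lam = 0.28 is an assumed value for the evolution speed lambda.
    return 2.0 * (ta / (1.53 * pa)) * (0.01 / x) ** lam

# At the nuclear edge T_A and p_A are both small, but the conditional
# density T_A/p_A stays of nucleon size, so Q_s^2 does not vanish:
print(qs2(0.1, 0.1, 1e-2), qs2(1.0, 1.0, 1e-2))  # equal by construction

for x in (1e-2, 1e-4, 1e-6):  # towards forward rapidity / higher energy
    print(f"x={x:.0e}: Qs^2={qs2(2.0, 1.0, x):.2f} GeV^2")
```

The first line illustrates why dividing by $p_A$ matters: averaging before constructing the wave function (using $T_A$ alone) would suppress $Q_s^2$ at the edge, whereas $T_A/p_A$ does not.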
Therefore, fixed and running coupling predict very similar results for RHIC and LHC overall multiplicities.\n\n\\subsection{Accelerator Results}\n\nFig.~\\ref{fig:npart} shows very good agreement with the data from the PHOBOS collaboration \\cite{phobos} for the multiplicity at mid-rapidity as a function of centrality. As already stated, the differences between fixed and running coupling evolution are small.\n\nIn Fig.~\\ref{fig:dndeta_LHC} we see predictions for central (b=2.4~fm) Pb-Pb collisions at the LHC ($\\sqrt{s}=5500~$GeV). The centrality dependence of the multiplicity at mid-rapidity is shown in Fig.~\\ref{fig:npart_LHC}.\n\n\\subsection{Iron Air}\n\n\\begin{figure}[htb]\n\\includegraphics[width=\\columnwidth]{FeN_LHC_C.eps}\n\\caption{The rapidity dependence of charged particles for central Fe-N\ncollisions at the three reference energies. The results for\nQGSJET-II and KLN (fixed and running coupling evolution) are shown.}\n\\label{fig:FeN_LHC}\n\\end{figure}\n\n\\begin{figure}[htb]\n\\includegraphics[width=\\columnwidth]{FeN_elab2_C.eps}\n\\caption{The total multiplicity of charged particles as a function of\nthe energy (in the laboratory frame) per nucleon. Sibyll predicts a\nhigher multiplicity, since it is a superposition model and scales with\nthe number of participants in the Iron nucleus as compared to p-air\ncollisions. QGSJET-II and the color glass approach predict similar\nmultiplicities up to LHC energies.}\n\\label{fig:FeN_elab2}\n\\end{figure}\n\nFig.~\\ref{fig:FeN_LHC} shows the rapidity dependence of charged particles for central Iron-Nitrogen collisions at the three reference energies for RHIC, LHC and GZK (200 GeV, 5500 GeV and 400 TeV, respectively). Up to LHC energies, differences at mid-rapidity are smaller than 15\\%. Only at GZK energies do we see a qualitative difference.\n\nFig.~\\ref{fig:FeN_elab2} shows the total multiplicity of charged particles as a function of the lab-energy per nucleon. 
Since in cosmic ray physics the energy of a nucleus is usually quoted as the total energy, $E_{lab}\/A$ is a useful quantity to plot when comparing to proton-air collisions. We also compare to the standard hadronic interaction models Sibyll~2.1 \\cite{sibyll} and QGSJET-IIc~\\cite{qgsjetII}. First, we observe that nucleus-air collisions in Sibyll are obtained by superposition of hadron-air collisions. The multiplicity therefore scales with the number of participants in the projectile and is a factor of 30 to 36 larger than for proton-air collisions. In the QGSJET-II model and the KLN approach, screening in the initial state reduces the multiplicity. The predicted results of these two approaches are quite similar up to LHC energies, whereas above this energy, QGSJET-II predicts a higher multiplicity than the color glass approach (running coupling evolution).\n\n\\subsection{Conclusions}\n\nWe compared the multiplicities of Iron-air collisions in the color glass condensate approach (KLN model) with hadronic interaction models used in air shower simulations. The results of QGSJET-II and KLN are quite similar up to LHC energies. Above this energy, QGSJET-II predicts higher multiplicities.\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n In the context of the general theory of relativity (GR), the red-shift in the spectra of distant objects is a metric phenomenon. In other words, knowing the metric of space-time, one can calculate the red-shift, whether it is astrophysical or cosmological. This depends on the fact that the trajectory of a photon, in a background gravitational field, is assumed to be a null-geodesic, while the metric of the space-time is the first integral of this null-geodesic. An alternative, and more general, method for calculating the red-shift is by using the results of theorems on null-geodesics established in Riemannian geometry by Kermack, McCrea, and Whittaker (KMW) in 1933. 
It is worth mentioning that the applications of the two methods, in the context of GR, give identical results. In both methods, it is assumed that the trajectory of a photon in a gravitational field is a null-geodesic. This implies neglecting any effect of the spin of the photon on its trajectory (a spin-independent trajectory). In calculating the red-shift, the first method is easier and more direct than the second; and since the two methods are equivalent in the context of GR, authors usually use the first method, neglecting the second one.\n\n Recently, some evidence indicating a probable dependence of the trajectories of spinning particles on their spin was reported. One piece of evidence, on the laboratory scale, is the discrepancy between theoretical calculations and the results of the COW experiment (Overhauser and Colella (1974), Colella et al. (1980) and Werner et al. (1988)). Another, on the galactic scale, concerns the arrival times of photons, neutrinos (and gravitons!) from SN1987A (Schramm and Truran (1990), Weber (1994) and De Rujula (1987)). On the other hand, a new path equation in the parameterized absolute parallelism (PAP) geometry, giving the trajectory of a spinning particle in a gravitational field (a spin-dependent trajectory), was suggested (Wanas (1998)). It is worth mentioning that the use of this equation has given a satisfactory interpretation of the discrepancy in the COW experiment (Wanas et al. (2000)), and can account for the time delay of spinning particles coming from SN1987A (Wanas et al. (2002)). If we use this equation to describe the trajectory of photons (spin-one massless particles), we encounter a problem concerning red-shift calculations. 
That is, the metric of space-time is not the first integral of the new path equation, and the red-shift is no longer a metric phenomenon. Consequently, one cannot calculate the red-shift using either of the above-mentioned methods. One way to solve this problem is to develop theorems on null-paths, similar to the KMW theorems, in order to apply them for obtaining the red-shift extracted from spinning particles. This is the aim of the present work.\\\\\n\n For this aim, we give a brief account of the KMW theorems in section 2. In section 3, we review the main features of the spin-dependent path equation. The validity of the KMW theorems in PAP-geometry is proved in section 4. In section 5 we derive a general formula for the red-shift, taking into account the spin of the particle from which we extract the red-shift. The work is discussed in section 6.\n\\section{KMW Theorems}\n\n The following theorems on null-geodesics were established by KMW (Kermack et al. (1933)) in a Riemannian space $R_n$ of dimension $n$, whose metric is given by:\n \\begin{equation}\n dS^2=g_{\\mu\\nu}~dx^\\mu~dx^\\nu.\n \\end{equation}\n Consider the null-geodesic $\\Gamma$ connecting the two neighboring points $C_0$ and $C_1$ in $R_n$. The tangent null-vector (the transport vector in Kermack et al. (1933)) is defined by\n\n \\begin{equation}\n \\eta^{\\rho} \\ {\\mathop{=}\\limits^{\\rm def}}\\ \\frac{dx^\\rho}{d\\lambda},\n \\end{equation}\n where $\\lambda$ is a parameter characterizing $\\Gamma$. The equation of $\\Gamma$ is given by\n \\begin{equation}\n {\\frac{d\\eta^\\rho}{d~\\lambda}}\n +\\cs{\\mu}{\\nu}{\\rho}~{\\eta^\\mu}{\\eta^\\nu}=0,\n \\end{equation}\n where $\\cs{\\mu}{\\nu}{\\rho}$ is the Christoffel symbol of the second type. 
Equation (3) follows from the Euler-Lagrange equation,\n \\begin{equation}\n \\frac{d}{d\\lambda}(\\frac{\\partial~T}{\\partial~\\eta^\\mu})\n -\\frac{\\partial~T}{\\partial~x^\\mu}=0,\n \\end{equation}\n upon taking\n \\begin{equation}\n ~T ~\\ {\\mathop{=}\\limits^{\\rm def}}\\ ~\\frac{1}{2}~g_{\\mu\\nu}~\\eta^\\mu~\\eta^\\nu~.\n \\end{equation}\n\n Let $\\Gamma^{'}$ be a null-geodesic parallel to $\\Gamma$ and passing through the point $C^{'}$ near $C$, and let $\\xi^\\sigma$ denote the vector $CC^{'}$. Let a scalar $J$ be defined as\n\n \\begin{equation}\n J \\ {\\mathop{=}\\limits^{\\rm def}}\\ \\eta_{\\alpha}~{\\xi^{\\alpha}} ,\n \\end{equation}\n where $\\eta_\\alpha$ is the covariant form of the vector $\\eta^\\alpha$.\n\n The KMW theorems can be stated as follows.\\\\\n\n{\\subsection*{\\bf {Theorem (I)}}}\n\n{\\it{\"The scalar $J$ given by (6) is independent of the choice of the direction $CC^{'}$, and is also independent of the position of $C$ on the null-geodesic $\\Gamma$.\"}}\\\\\n It depends only on the two null-geodesics as a whole and not on any particular point on them. This is the first theorem, which was rigorously proved by KMW.\n\n Consider the null-geodesics parallel to $\\Gamma$, of which there are $\\infty^{n-1}$. 
If we form the scalar $J$ corresponding to $\\eta^\\mu$ and any of these null-geodesics, we find that for any particular value of $J$ there are $\\infty^{n-2}$ parallel null-geodesics.\\\\\n\n{\\subsection*{\\bf {Theorem (II)}}}\n\n{\\it {\"If the set of $\\infty^{n-2}$ null-geodesics lies in a $R_{n-1}$ which intersects a local flat subspace $E_{n-1}$, at $C$, in a local flat subspace $E_{n-2}$, then $E_{n-2}$ is perpendicular to the projection of $\\Gamma$ in $E_{n-1}$.\"}}\n\n As a consequence of these two theorems, KMW were able to derive the following formula for the red-shift extracted from photons, assuming that a photon moves along a null-geodesic of the metric:\n \\begin{equation}\nZ={\\frac{\\Delta ~\\lambda}{\\lambda}}=\n{\\frac{[{\\rho}_{\\mu}~{\\eta}^{\\mu}]_{C_1}~-~{[{\\varpi}_{\\mu}~{\\eta}^{\\mu}]_{C_0}}}\n{[{\\varpi}_{\\mu}~{\\eta}^{\\mu}]_{C_0}}},\n\\end{equation}\n where $\\rho_\\mu, \\varpi_\\mu$ are the covariant forms of the tangents to the geodesics of the observers at $C_1, C_0$, respectively.\n\\section{Spin-Dependent Path Equation}\nIt is well known that Riemannian geometry possesses two types of paths. The first is the geodesic and the second is the null-geodesic. The equations of these two paths can be written in the general form\n \\begin{equation}\n {{\\frac{d^2x^\\mu}{d~p^2}} +\n \\cs{\\alpha}{\\beta}{\\mu}~{\\frac{dx^\\alpha}{d~p}}~{\\frac{dx^\\beta}{d~p}}=\n 0},\n \\end{equation}\n where $p$ is a parameter characterizing the trajectory of a massive or massless particle (cf. Adler et al. (1975)). This parameter may be related to the parameter $S$ of (1) by\n\n \\begin{equation}\n dS^2 = E\\, dp^2,\n \\end{equation}\n where $E$ is a numerical parameter taking the values\n \\begin{eqnarray}\n E&=&0, \\quad \\mbox{for a null-geodesic}, \\nonumber\\\\\n E&=&1, \\quad \\mbox{for a geodesic}.\n \\end{eqnarray}\n\n Wanas et al. 
(1995) directed their attention to the absolute parallelism space (AP-space), and by generalizing the method given by Bazanski (1977), (1989), they derived the following set of three path equations:\n \\begin{equation}\n {{\\frac{dJ^\\mu}{dS^-}} + \\cs{\\nu}{\\sigma}{\\mu}\\ J^\\nu J^\\sigma = 0},\n \\end{equation}\n \\begin{equation}\n {{\\frac{dW^\\mu}{dS^0}} + \\cs{\\nu}{\\sigma}{\\mu}\\ W^\\nu W^\\sigma =-\n {\\frac{1}{2}}\\Lambda_{(\\nu \\sigma)} . ^\\mu~~ W^\\nu W^\\sigma},\n \\end{equation}\n \\begin{equation}\n {{\\frac{dV^\\mu}{dS^+}} + \\cs{\\nu}{\\sigma}{\\mu}\\ V^\\nu V^\\sigma =-\n \\Lambda_{(\\nu \\sigma)} . ^\\mu~~ V^\\nu V^\\sigma} ,\n \\end{equation}\n where $J^\\mu$, $W^\\mu$ and $V^\\mu$ are the tangents to the corresponding curves, whose parameters are $S^-$, $S^0$ and $S^+$ respectively, and $\\Lambda^\\alpha _{~\\mu \\nu}$ is the torsion of the AP-geometry, defined by\n \\begin{equation}\n \\Lambda^\\alpha_{~ \\mu \\nu} \\ {\\mathop{=}\\limits^{\\rm def}}\\ ~ \\Gamma^\\alpha_{. \\mu \\nu}\n -\\Gamma^\\alpha_{. \\nu \\mu},\n \\end{equation}\n where $\\Gamma^\\alpha_{. \\nu \\mu}$ is the non-symmetric affine connection defined as a consequence of the AP condition.\n\n Wanas (1998) defined a general expression for a connection formed by taking linear combinations of the available connections in the AP-space. He mentioned that the metricity condition is necessary but not sufficient to define the Christoffel symbol. He generalized the three path equations (11), (12) and (13) into the following single equation,\n \\begin{equation}\n {{\\frac{dZ^\\mu}{d\\tau}} + \\cs{\\nu}{\\sigma}{\\mu}\\ Z^\\nu Z^\\sigma =-\n {b}\\Lambda_{(\\nu \\sigma)} . 
^\\mu~~ Z^\\nu Z^\\sigma},\n \\end{equation}\n where b is a parameter given by $b={\\frac{n}{2}}{\\alpha}{\\gamma}$,\n$n(=0,1,2,...,)$ is a natural number $\\alpha~$ is the fine structure\nconstant and $\\gamma~$\n is a numerical free parameter,\n $Z^{\\mu}{\\ {\\mathop{=}\\limits^{\\rm def}}\\ }{\\frac{d~X^\\mu}{d~\\tau}}$,\n and $\\tau $ is the evolution\n parameter along the new general path (15) associated with the general connection:\n \\begin{equation}\n \\nabla^\\alpha_{.~\\mu \\nu }=\n ~\\cs{\\mu}{\\nu}{\\alpha}+~b~ \\gamma^\\alpha_{. ~\\mu\n \\nu} ,\n \\end{equation}\nThe geometric structure characterized by the connection (16) is\ncalled the PAP geometry (Wanas(2000)). It is worth of mention that\nequation (15), represents a\n generalization of the three path equations given above. This\n equation will reduce to the equation of geodesic (null-geodesic upon\n reparameterization) in the Riemannian geometry, when the parameter $b=0$.\n\n It has been shown that (15), can be used to represent trajectories of spinning test particles, massive or\n massless, in a gravitational field.\n\n\n\\section{Theorems on Null-Paths in PAP-spaces}\n\n\nLet $~{\\Gamma}~$, $~{\\Gamma}^{'}~$ be two neighboring null-paths of\nthe type given by (15) defined in PAP-space of dimensions $(n)$. Let\nC be a point on $~{\\Gamma}~$ and $~{C^{'}}~$ be a neighboring point\non $~{\\Gamma~}'~$. Let $~\\zeta^\\mu~$ be the vector $~{CC^{'}}~$\nconnecting the two points. Let us define the following scalar,\n \\begin{equation}\n ~{J}{\\ {\\mathop{=}\\limits^{\\rm def}}\\ } Z^{\\mu}{\\zeta_\\mu},\n \\end{equation}\nwhere $~Z^\\mu~$ is the null tangent to the path\n(15), defined at C. 
Then we can prove the following theorem:\\\\\n\n {\\subsection*{\\bf {\\it {Theorem (I)}}}}\n {\\it {\"The scalar $~{J}$ is independent of the position of the point $C$ on the null-path $\\Gamma$ and is also independent of the choice of the direction $CC^{'}$, but depends on the two null-paths themselves.\"}}\\\\\nNow, as mentioned in the previous section, the equation of the null-paths in the PAP-geometry is given by (15) and the equation of the null-geodesics in Riemannian geometry is given by (8). It is clear that equation (15) reduces to equation (8) if $b=0$. Keeping in mind that for every absolute parallelism space there exists an associated Riemannian one, we can relate the objects of the first equation (15) to the objects of the second (8) by the following relations:\n\n \\begin{equation}\n Z^\\nu=\\eta^\\nu~(1+g(b)),\n \\end{equation}\n\\begin{equation}\n{\\tau}=p~(1+f(b)),\n\\end{equation}\nand\n\\begin{equation}\n\\zeta_\\nu=\\xi_\\nu~(1+l(b)),\n \\end{equation}\nwhere $g(b), f(b)$ and $l(b)$ are positive functions of the parameter $b$ that tend to zero when $b$ goes to zero. Now let us evaluate the scalar $~{J}{\\ {\\mathop{=}\\limits^{\\rm def}}\\ } Z^{\\mu}{\\zeta_\\mu}~$. 
By using equation (19), we get\n\\begin{equation}\n\\frac{d~p}{d~{\\tau}}=\\frac{1}{(1+f(b))}.\n\\end{equation}\nIf we use (18), (20) and (21), we have\n\\begin{eqnarray}\n\\frac{d~}{d~{\\tau}}(Z^{\\mu}{\\zeta_\\mu})&=& \\frac{1}{(1+f(b))}{\\frac{d~}{d~p}}(Z^{\\mu}{\\zeta_\\mu})\\nonumber\\\\\n &=& \\frac{1}{(1+f(b))}({\\frac{d~}{d~p}}(\\eta^\\nu~(1+g(b))(\\xi_\\nu~(1+l(b))))\\nonumber\\\\\n&=&\n\\frac{1}{(1+f(b))}(1+g(b))(1+l(b))({\\frac{d~}{d~p}}(\\eta^\\nu~\\xi_\\nu~)).\n\\end{eqnarray}\nNow, by recalling equation (6) and theorem (I) of KMW, we get\n\\begin{eqnarray}\n\\frac{d~}{d~{\\tau}}(Z^{\\mu}{\\zeta_\\mu})&=& 0, \\nonumber\\\\\n{\\rm i.e.}~~~~{Z^\\mu}{\\zeta_\\mu}&=&{\\rm constant}.\n\\end{eqnarray}\n\n This result proves theorem (I) for null-paths of the PAP-space. Now, as an extension of the idea of a null-path $\\Gamma$ passing through a given point $C$ in the PAP-space $T_n$, one can find $\\infty^{n-1}$ null-paths, in the neighborhood of the point $C$, parallel to the null-path $\\Gamma$. The second theorem can be stated as follows:\\\\\n {\\subsection*{\\bf {\\it {Theorem (II)}}}}\n {\\it {For a definite value of the scalar $J$, we have $\\infty^{n-2}$ parallel null-paths lying in a subspace $T_{(n-1)}$, which intersects a local flat subspace $E_{(n-1)}$ of $(n-1)$ dimensions at the point $C$ in a local $(n-2)$-dimensional flat subspace $E_{(n-2)}$. This $E_{(n-2)}$ is perpendicular to the projection of the null-path $\\Gamma$ in $E_{(n-1)}$.}}\\\\\n\n To prove this theorem, we shall assume that at the point $C$, or at any point in its neighborhood, there is no singularity. 
If we choose rectangular axes at $C$ such that the direction ratios along the null-path $\\Gamma$ are $(\\lambda_1,\\lambda_2,......,\\lambda_n)$, then:\n \\begin{equation}\n {\\lambda_1}^2+{\\lambda_2}^2+......+{\\lambda_n}^2=0.\n \\end{equation}\n So, the tangent null-vector $Z^\\mu$ to the null-path at $C$ has the components $(m\\lambda_1,m\\lambda_2,......,m\\lambda_n)$, where $m$ is some constant. If the coordinates of the point $C$ are $(x^1,x^2,....,x^n)$, and if we take the connecting vector $\\zeta^\\mu$, defined above, as $(x^1,x^2,...,x^n)$, then the scalar $J$ takes the form:\n \\begin{equation}\n J=(x^1\\lambda_1+x^2\\lambda_2+...+x^n \\lambda_n)m,\n \\end{equation}\n i.e.\n \\begin{equation}\n (x^1\\lambda_1+x^2\\lambda_2+....+x^n\\lambda_n)=\\frac{~{J}}{m}.\n \\end{equation}\n Due to theorem (I), $J$ remains constant wherever $(x^1,x^2,...,x^n)$ lies in the hyperplane, i.e., for any other point $C^{'}$ on a null-path $\\Gamma^{'}$, parallel to $\\Gamma$, of coordinates $(x^1+k~\\lambda_1,x^2+k~\\lambda_2,...,x^n+k~\\lambda_n)$, where $k$ is a variable parameter depending on the null-path $\\Gamma^{'}$, $J$ remains constant. So, if we take the components of $\\zeta^\\mu$ at the point $C^{'}$ on the null-path $\\Gamma^{'}$ to be $(x^1+k\\lambda_1,x^2+k\\lambda_2,....,x^n+k\\lambda_n)$, and substitute in equation (25), we have:\\\\\n $$~{J}=m(x^1\\lambda_1+k{\\lambda_1}^2+x^2\\lambda_2+k{\\lambda_2}^2+.....+x^n\\lambda_n+k{\\lambda_n}^2).$$\n Then, using (24), we get the same equation (25). This means that any one of the set of null-paths parallel to the null-path $\\Gamma$ must lie in the hyperplane given by (26), in order to keep $~{J}$ constant. 
Now if we put $x^1=0$, it follows directly that all the points in which these null-paths cut any local subspace $E_{(n-1)}$ lie in a local subspace $E_{(n-2)}$ given by:\n $$ (x^2\\lambda_2+.....+x^n\\lambda_n)=\\frac{~{J}}{m}.$$\n This $E_{(n-2)}$ is perpendicular to the projection of the null-path $\\Gamma$ in $E_{(n-1)}$, which is given by:\n\n \\begin{equation}\n \\frac{x^2}{\\lambda_2}=.....=\\frac{x^n}{\\lambda_n}.\n \\end{equation}\nHence, the second theorem is proved.\n\n By using the two previous theorems and the same idea of KMW, we can write a general expression for the red-shift. This expression depends essentially on the idea of wave fronts, which are represented by a set of null-paths passing through the points of the wave front. Their projections in the local subspace are perpendicular to this wave front. So, the actual wavelength is determined by the perpendicular distance between the wave fronts corresponding to two parallel sets of null-paths. In other words, it is the interval between the points of intersection of the observer's world line with the two local subspaces defined by the two sets of parallel null-paths.\n\n\\section{General Expression of the Red-Shift in PAP-Space}\nIn the previous section, we have shown that the two theorems on null-geodesics, proved by KMW, are also applicable to the null-paths (15). Now, we assume that the trajectory of a photon (spin-1 particle) in a gravitational field is spin-dependent and given by equation (15). So, we can easily establish a general formula for the red-shift of spectral lines similar to that given by KMW.\n\n Consider a null-path of the form (15) connecting the two points $C_1$, $C_0$ at which two observers A and B are located, respectively. The null-path $\\Gamma$ belongs to the same wave front observed by A and B. 
Let ${\\eta_1}^{\\mu}$ and ${\\eta_0}^{\\mu}$ be the components of the transport null tangent to $\\Gamma$ at $C_1$ and $C_0$, respectively. Let $\\Gamma^{'}$ be a null-path parallel to $\\Gamma$, belonging to the succeeding wave front, intersecting the world lines of A and B at $C_1'$ and $C_0'$, respectively. If $\\lambda_1$, $\\lambda_0$ are the wavelengths of the same spectral line as observed by A and B, respectively, then the vectors $C_1{C_1}'$ and $C_0{C_0}'$ are $\\lambda_1\\rho^{\\mu}$ and $\\lambda_0\\varpi^{\\mu}$, where $\\rho^{\\mu}$, $\\varpi^{\\mu}$ are the components of the unit tangents to the world lines of A and B, respectively. These unit tangents are solutions of (15), for A and B, upon taking $b=0$. The vectors $\\lambda_1\\rho^{\\mu}$ and $\\lambda_0\\varpi^{\\mu}$ are the values of the vector $\\zeta^{\\mu}$ of theorem (I), evaluated at $C_1$, $C_0$ respectively, while ${\\eta_1}^{\\mu}$, ${\\eta_0}^{\\mu}$ represent the values of the vector $Z^\\mu$ of the same theorem, evaluated at $C_1$, $C_0$ as stated above. Now, applying theorem (I) and equating the values of $~{J}$ at $C_1$ and at $C_0$, we get\n\n \\begin{equation}\n \\lambda_1~{\\eta_1}^{\\mu}~\\rho_\\mu=\\lambda_0~{\\eta_0}^{\\mu}~\\varpi_\\mu,\n \\end{equation}\ni.e.,\n\\begin{equation}\n \\frac{\\lambda_0}{\\lambda_1}\n =\\frac{{\\rho}_{\\mu}~{{\\eta}^{\\mu}}_1}{{\\varpi}_{\\mu}~{{\\eta}^{\\mu}}_0},\n \\end{equation}\nthus,\n\\begin{equation}\n \\frac{\\Delta\\lambda}{\\lambda_1}= \\frac{\\lambda_0-\\lambda_1}{\\lambda_1}\n ={\\frac{{\\rho}_{\\mu}~{{\\eta}^{\\mu}}_1}{{\\varpi}_{\\mu}~{{\\eta}^{\\mu}}_0\n }}-1.\n \\end{equation}\n This gives a general formula for the red-shift of spectral lines coming from a distant object.\n\\section{Concluding Remarks}\nIn the present work, we have investigated the validity of two important theorems on null-geodesics in the context of the PAP-geometry. 
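The red-shift formula of the preceding section is just a ratio of two inner products. A schematic numerical check, with a flat metric and made-up vector components (purely illustrative, not taken from the paper), is:

```python
def inner(g, u, v):
    # Contraction g_{mu nu} u^mu v^nu for a diagonal metric g.
    return sum(gi * ui * vi for gi, ui, vi in zip(g, u, v))

g = (1.0, -1.0, -1.0, -1.0)     # flat metric, signature (+,-,-,-); assumed
eta0 = (1.0, 1.0, 0.0, 0.0)     # null tangent at the emitter C0
eta1 = (0.8, 0.8, 0.0, 0.0)     # null tangent at the observer C1
varpi = (1.0, 0.0, 0.0, 0.0)    # emitter 4-velocity (at rest)
rho = (1.25, -0.75, 0.0, 0.0)   # observer 4-velocity, boosted; 1.25^2 - 0.75^2 = 1

# Ratio form of the red-shift: Delta(lambda)/lambda_1
#   = (rho_mu eta1^mu) / (varpi_mu eta0^mu) - 1
z = inner(g, rho, eta1) / inner(g, varpi, eta0) - 1.0
print(z)  # 0.6 for these illustrative vectors
```

In a spin-dependent treatment only the input vectors change (solutions of the spin-dependent path equation rather than null-geodesics); the contraction itself is unchanged.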
These two theorems are important to establish a general formula for the red-shift of spectral lines, especially when the trajectories of massless particles are spin-dependent. Equation (15) is the equation representing such trajectories. This equation can be used as an equation of motion in the context of any field theory written in the AP-geometry, including GR (cf. Wanas (1990)).\n\nIn conclusion, we can make the following general remarks:\\\\\n (1) In the present work, we tried, as far as we could, to use the same notation as in the original work of KMW in order to facilitate comparison.\\\\ (2) The path equation (15) can be used to represent the trajectory of a test particle or the trajectory of a massless particle, in a background gravitational field, upon adjusting the parameter $\\tau$. The right-hand side of this equation is suggested to represent a type of interaction between the spin of the moving particle and the torsion of the background gravitational field. The parameter $b$ is a spin-dependent parameter (Wanas (1998)).\\\\\n(3) The two theorems on (15), when it represents null-paths, reduce to the original KMW theorems, reviewed in section 2, upon taking $b=0$.\\\\ (4) Equation (30) gives the red-shift taking into account the spin-torsion interaction. This equation appears to be the same as that given by KMW, but the main difference is that the effect of the spin of the moving particle will appear in the values of the null-vectors ${\\eta_0}^{\\mu}$ and ${\\eta_1}^{\\mu}$, which are solutions of equation (15) and not of the null-geodesic equation.\\\\\n (5) In the KMW paper, the authors used formula (7) to obtain the Doppler shift. In an attempt to interpret the solar limb effect (Mikhail et al. 
(2001)), the use of (7) has been widened to account not only for the relative radial velocity between A and B (Doppler shift), but also for:\\\\\n (i) The effect of gravity (gravitational red-shift).\\\\\n (ii) The effect of the direction of the null-geodesic.\\\\\n In addition to these effects, formula (30) will also account for the effect of the spin-torsion interaction on the value of the red-shift.\\\\\n (6) Formula (30) can be used to get the red-shift whether it is treated as a metric phenomenon or not.\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\nOne of the main insights from queueing theory is that the queue length and sojourn time are of the order $1\/(1-\\rho)$, as the traffic intensity of the system $\\rho$ approaches 100 percent utilization. This insight dates back to \\citet{kingman1961single} and \\citet{prokhorov1963transition} and, appropriately reformulated, remains valid for queueing networks and multiple server queues \\citep{whitt2002stochastic, garmanik2006validity, braverman2017heavy}. This picture can change dramatically when the scheduling policy is no longer First-In-First-Out ($\\mathrm{FIFO}$). \\citet{bansal2005average} was the first to point out that the mean sojourn time (a.k.a.\\ response time, flow time) of a user is of $o(1\/(1-\\rho))$ in the $\\mathrm{M\/M\/1}$ queue when the scheduling policy is Shortest Remaining Processing Time ($\\mathrm{SRPT}$). This result was later generalized to several non-exponential service time distributions in \\citet{lin2011heavy}. 
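The classical $1/(1-\rho)$ growth mentioned above can be made concrete with the textbook M/M/1/FIFO formula $\mathbb{E}[T]=1/(\mu(1-\rho))$; this is a standard fact added here for illustration, not a result of this paper:

```python
# Textbook M/M/1 FIFO mean sojourn time E[T] = 1/(mu - lambda) = 1/(mu*(1-rho)),
# illustrating the Theta(1/(1-rho)) heavy-traffic growth for FIFO.
mu = 1.0  # service rate; rho = lambda/mu is the traffic intensity
for rho in (0.9, 0.99, 0.999):
    et = 1.0 / (mu * (1.0 - rho))
    # (1-rho)*E[T] stays constant, i.e. E[T] = Theta(1/(1-rho)):
    print(f"rho={rho}: E[T]={et:.0f}, (1-rho)*E[T]={(1.0 - rho) * et:.2f}")
```

The scheduling results discussed next are precisely about beating (SRPT) or matching up to logarithmic factors (blind policies) this baseline growth rate.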
More recently, \citet{puha2015diffusion} derived a process limit theorem for the $\mathrm{SRPT}$ queue length, under an assumption that implies that all moments of the service time are finite.

As $\mathrm{SRPT}$ requires information on service times in advance, the question was raised whether the same growth rate in heavy traffic can be reached with a blind scheduling policy, a question that was answered negatively in \citet{bansal2018achievable}. Specifically, the authors showed that for every blind scheduling policy, there exists a service time distribution under which the growth rate in heavy traffic of the mean sojourn time is at least a factor $\log(1/(1-\rho))$ larger than the growth rate of $\mathrm{SRPT}$. \citeauthor{bansal2018achievable} also construct a scheduling policy that achieves this growth rate, but this policy is rather complicated as it involves randomization. All of the results mentioned thus far only concern the mean sojourn time, and it is of interest to obtain information about the distribution of the sojourn time as well.

Motivated by these developments, we consider the Foreground-Background ($\mathrm{FB}$) scheduling policy in this paper. More precisely, we investigate the invariant distribution of the sojourn time of a customer in an $\mathrm{M/GI/1}/\mathrm{FB}$ queue. The $\mathrm{FB}$ policy operates as follows: priority is given to the customer with the least-attained service, and when multiple customers satisfy this property, they are served at an equal rate. The only heavy-traffic results for $\mathrm{FB}$ we are aware of are of ``big-$\O$'' type and are known in the case of deterministic, exponential, Pareto and specific finite-support service times \citep{bansal2006handling, nuyens2008foreground}.
For deterministic service times, it is easy to see that all customers under $\mathrm{FB}$ depart in one batch at the end of every busy period, and as a result the growth rate in heavy traffic is very poor in this case, namely $\O((1-\rho)^{-2})$. The behaviour of $\mathrm{FB}$ is much better for service-time distributions with a decreasing failure rate, as $\mathrm{FB}$ then optimizes the mean sojourn time among all blind policies \citep{righter1989scheduling}. For more background on the $\mathrm{FB}$ policy we refer to the survey by \citet{nuyens2008foreground}.

The main results of this paper are of three types:
\begin{enumerate}
\item We characterize the exact growth rate (up to a constant independent of $\rho$) of the sojourn time in heavy traffic under very general assumptions on the service time distribution. As in \citet{bansal2006handling} and \citet{lin2011heavy}, we find a dichotomy: when the service time distribution has finite variance, the mean sojourn time satisfies $\mathbb{E}[T_\mathrm{FB}^\r] = \Theta\left(\frac{\overline{F}(\inv{G}(\r))}{(1-\r)^2}\right)$. Here $\overline{F}(x)=1-F(x)$ is the tail of the service time distribution and $\inv{G}$ is the right-inverse of the distribution function of a residual service time; a detailed overview of notation can be found in Section~\ref{subsec:matuszewska}. In the infinite variance case, we find that $\mathbb{E}[T_\mathrm{FB}^\r] = \Theta\left(\log \frac{1}{1-\r}\right)$. This result is formally stated in Theorem~\ref{thm:ETMatuszewska}. The precise conditions for these results to hold involve Matuszewska indices, a concept that will be reviewed in Section~\ref{sec:preliminaries}.
The behaviour of $\overline{F}(\inv{G}(\r))$ is quite rich, as will be illustrated by several examples.

\item Contrary to the results in \citet{bansal2006handling} and \citet{lin2011heavy}, we have been able to obtain a more precise estimate of the growth rate of $\mathbb{E}[T_\mathrm{FB}^\r]$. It turns out that extreme value theory plays an essential role in our analysis, and the limiting constant factor in front of the growth rate $\frac{\overline{F}(\inv{G}(\r))}{(1-\r)^2}$ depends crucially on the domain of attraction to which the service-time distribution belongs. This result is summarized in Theorem~\ref{thm:ETextreme} and complemented by Theorem~\ref{thm:TmeanGumbel}. When the service-time distribution tail is regularly varying, it is shown that the growth rate of the sojourn time under $\mathrm{FB}$ is equal to that of $\mathrm{SRPT}$ up to a finite constant. A comparison of the sojourn times under $\mathrm{FB}$ and $\mathrm{SRPT}$ is given in Corollary~\ref{cor:FBvsSRPT}.

\item When analysing the distribution, we first show that $T_\mathrm{FB}^\r/\mathbb{E}[T_\mathrm{FB}^\r]$ converges to zero in probability as ${\r\uparrow 1}$. To still get a heavy traffic approximation for $\P(T_\mathrm{FB}^\r>y)$, we state a sample path representation for the sojourn time distribution of a job that requires a known amount of service. We then use fluctuation theory for spectrally negative L{\'e}vy processes to rewrite this representation into an expression that is amenable to analysis; in particular, we obtain a representation for the Laplace transform of the \textit{residual} sojourn time distribution that is of independent interest, and from which a heavy-traffic limit theorem follows. Finally, the Laplace transform implies an estimate for the tail of $T_\mathrm{FB}$.

More specifically, our results show that $\P((1-\r)^2 T_\mathrm{FB} > y)/ \overline{F}(\inv{G}(\r))$ converges to a non-trivial function $g^*(y)$, for which we give an integral expression in terms of error functions. Along the way, we derive a heavy traffic limit for the total workload in an $\mathrm{M/GI/1}$ queue with truncated service times, which also seems to be of independent interest (see Proposition~\ref{prop:WtoExp}). As in the analysis of the mean sojourn time, ideas from extreme value theory play an important role, and the limit function $g^*$ depends on which domain of attraction the service-time distribution falls into. A precise description of this result can be found in Theorem~\ref{thm:Ttail}.
\end{enumerate}

Despite the fact that extreme value theory appears both in our analysis and in our end results, the precise role of extreme value theory is not entirely clear from the analysis in this paper. A challenging topic for future research is to find a completely probabilistic proof of our results, e.g.\ a proof that does not use explicit integral expressions for the mean sojourn time. To this end, it makes sense to consider a more general class of scheduling policies, for example the class of SMART scheduling policies considered in \citet{wierman2005nearly} and \citet{nuyens2008preventing}. This is beyond the scope of the present paper.

The rest of the paper is organized as follows. Section~\ref{sec:preliminaries} formally introduces the model that is considered. Section~\ref{sec:mainresults} presents all our main results on the asymptotic behaviour of the mean and the tail of the sojourn time distribution under $\mathrm{FB}$.
The results on the mean are then proven in Sections~\ref{sec:Tmean} and \ref{sec:TmeanGumbel}, whereas the results on the tail distribution are proven in Sections~\ref{sec:ToverET} and \ref{sec:Ttail}.

\section{Preliminaries}
\label{sec:preliminaries}
In this paper we consider a sequence of $\mathrm{M/GI/1}$ queues, indexed by $n$, where the $i$-th job requires $B_i$ units of service for all $n$. For convenience, we say that a job that requires $x$ units of service is a \textit{job of size $x$}. All $B_i$ are independent and identically distributed (i.i.d.) random variables with cumulative distribution function (c.d.f.) $F(x) = \P(B_i\leq x)$ and finite mean $\mathbb{E}[B]$. We assume that $F(0)=0$, and denote $x_R:=\sup\{x\geq 0: F(x) < 1\}\leq \infty$. Jobs in the $n$-th queue arrive at rate $\l^{(n)}$, where $\l^{(n)}<1/\mathbb{E}[B_1]$ to ensure that the $n$-th system experiences work intensity $\r^{(n)} := \l^{(n)}\mathbb{E}[B_1] < 1$. For notational convenience, we let $B$ denote a random variable with c.d.f.\ $F$.

Let $\overline{F}(x):=1-F(x)$ and $\Finv(y):=\inf\{x\geq 0: F(x)\geq y\}$ denote the complementary c.d.f.\ (c.c.d.f.) and the right-inverse of $F$, respectively. The random variable $B^*$ is defined by its c.d.f.\ $G(x):=\P(B^*\leq x) = \int_0^x \overline{F}(t)/\mathbb{E}[B] \,\mathrm{d} t$ and has $k$-th moment $\mathbb{E}[(B^*)^k]=\mathbb{E}[B^k]/(k\mathbb{E}[B])$. Since $G$ is continuous and strictly increasing on the support of $B^*$, its right-inverse $\inv{G}(y)$ satisfies $\inv{G}(G(x))=x$. Also, we recognize $h^*(x) := \frac{\overline{F}(x)}{\mathbb{E}[B]\bar{G}(x)}$ as the failure rate of $B^*$.
One may deduce that $h^*(x)$ equals the reciprocal of the mean residual service time: $h^*(x)=1/\mathbb{E}[B-x\mid B>x]$.

\subsection*{Foreground-Background scheduling policy}
Jobs are served according to the Foreground-Background ($\mathrm{FB}$) policy, meaning that at any moment in time, the server equally shares its capacity over all available jobs that have received the least amount of service thus far. First, we are interested in characteristics of the sojourn time $T^{(n)}_{\mathrm{FB}}$, defined as the duration of time that a generic job spends in the system. In order to analyse this, we consider an expression for the mean sojourn time of a generic job \textit{of size $x$}, $\mathbb{E}[T^{(n)}_{\mathrm{FB}}(x)]$, for which \citet{schrage1967queue} states that
\begin{equation}
 \mathbb{E}[T^{(n)}_{\mathrm{FB}}(x)] = \frac{x}{1-\r^{(n)}_x} + \frac{\mathbb{E}[W^{(n)}(x)]}{1-\r^{(n)}_x} = \frac{x}{1-\r^{(n)}_x} + \frac{\lambda^{(n)} m_2(x)}{2(1-\r^{(n)}_x)^2},
 \label{eq:Tx}
\end{equation}
where $\rho^{(n)}_x := \lambda^{(n)} \mathbb{E}[B\wedge x] = \r^{(n)} \P(B^*\leq x)$ and $m_2(x) := \mathbb{E}[(B\wedge x)^2] = 2 \int_0^x t\overline{F}(t)\,\mathrm{d} t$ are functions of the first and second moments of $B\wedge x:=\min\{B,x\}$, and $W^{(n)}(x)$ is the steady-state waiting time in an $\mathrm{M/GI/1}/\mathrm{FIFO}$ queue with arrival rate $\l^{(n)}$ and jobs of size $B_i\wedge x$. The intuition behind this result is that a job $J_1$ of size $x$ experiences a system where all job sizes are truncated. Indeed, if another job $J_2$ of size $x+y$, $y>0$, has received at least $x$ units of service, then $\mathrm{FB}$ will never dedicate its resources to job $J_2$ while job $J_1$ is incomplete.
The mean sojourn time of job $J_1$ can thus be assembled from its own service requirement $x$, the truncated amount of work already in the system upon arrival, $W^{(n)}(x)$, and the rate $1-\r^{(n)}_x$ at which it is expected to be served, yielding~\eqref{eq:Tx}. As a consequence, the mean sojourn time $\mathbb{E}[T^{(n)}_{\mathrm{FB}}]$ of a generic job is given by
\begin{equation}
 \mathbb{E}[T^{(n)}_{\mathrm{FB}}] = \int_0^\infty \frac{x}{1-\r^{(n)}_x} \,\mathrm{d} F(x) + \int_0^\infty \frac{\lambda^{(n)} m_2(x)}{2(1-\r^{(n)}_x)^2} \,\mathrm{d} F(x).
 \label{eq:T}
\end{equation}

Second, we focus attention on the tail behaviour of $T^{(n)}_{\mathrm{FB}}$. Write $A \stackrel{d}{=} B$ if $\P(A\leq x) = \P(B\leq x)$ for all $x\in\mathbb{R}$, and let $\mathcal{L}_x(y)$ denote the time required by the server to empty the system given that job sizes are truncated to $B_i\wedge x$ and the current amount of work is $y$. The analysis of the tail behaviour is then facilitated by relation~(4.28) in \citet{kleinrock1976queueingcomputer}, stating
\begin{equation}
 T_\mathrm{FB}^{(n)}(x) \stackrel{d}{=} \mathcal{L}_x(W^{(n)}(x)+x).
 \label{eq:Tbusyperiod}
\end{equation}

For both the mean and tail behaviour of $T^{(n)}_{\mathrm{FB}}$, we take specific interest in systems that experience \textit{heavy traffic}, that is, systems where $\rho^{(n)}\uparrow 1$ as $n\rightarrow\infty$. In the current setting, this is equivalent to sequences $\l^{(n)}$ that converge to $1/\mathbb{E}[B]$. Most results in this paper make no assumptions on the sequence $\l^{(n)}$, in which case we drop the superscript $n$ for notational convenience and simply write ${\r\uparrow 1}$.

The remainder of this section introduces some notation related to Matuszewska indices and extreme value theory.

\subsection*{Matuszewska indices}
\label{subsec:matuszewska}
\begin{definition}
Suppose that $f(\cdot)$ is positive.
\begin{itemize}
\item The \textit{upper Matuszewska index $\a(f)$} is the infimum of those $\a$ for which there exists a constant $C=C(\a)$ such that for each $\mu^*>1$,
 \begin{equation}
 f(\mu x) / f(x) \leq C \mu^\a \left(1+o(1)\right)
 \label{eq:upperMatuszewska}
 \end{equation}
 uniformly in $\mu\in[1,\mu^*]$ as $x\rightarrow\infty$.
\item The \textit{lower Matuszewska index $\beta(f)$} is the supremum of those $\beta$ for which there exists a constant $D=D(\beta)>0$ such that for each $\mu^*>1$,
 \begin{equation}
 f(\mu x) / f(x) \geq D \mu^\beta \left(1+o(1)\right)
 \label{eq:lowerMatuszewska}
 \end{equation}
 uniformly in $\mu\in[1,\mu^*]$ as $x\rightarrow\infty$.
\end{itemize}
\end{definition}
One may note from the above definitions that $\b(f) = -\a(1/f)$ holds for any positive $f$. Intuitively, a function $f$ with upper and lower Matuszewska indices $\a(f)$ and $\b(f)$ is bounded between functions $D x^{\b(f)}$ and $C x^{\a(f)}$ for appropriate constants $C,D>0$. More accurately, however, $C$ and $D$ could be unbounded or vanishing functions of $x$. Of special interest is the class of functions that satisfy $\b(f)=\a(f)$.
\begin{definition}
A measurable function $f:\mathbb{R}_{\geq 0}\rightarrow \mathbb{R}_{\geq 0}$ is \textit{regularly varying at infinity with index $\a\in\mathbb{R}$} (written $f\in\mathrm{RV}_\a$) if for all $\mu >0$
\begin{equation}
 \lim_{x\rightarrow\infty} f(\mu x)/f(x) = \mu^\a.
 \label{eq:regularlyvarying}
\end{equation}
If~\eqref{eq:regularlyvarying} holds with $\a=0$, then $f$ is called \textit{slowly varying}.
If~\eqref{eq:regularlyvarying} holds with $\a=-\infty$, then $f$ is called \textit{rapidly varying}.
\end{definition}

\subsection*{Extreme value theory}
\label{subsec:extreme}
The following paragraphs introduce some notions and results from extreme value theory.
The field of extreme value theory generally aims to assess the probability of an extreme event; for our purposes, however, we restrict attention to the limiting distribution of $\max\{B_1,\ldots,B_n\}$. A key result on this functional is the Fisher-Tippett theorem:

\begin{theorem}[\citet{resnick1987extreme}, Proposition 0.3]
Let $(B_n)_{n\in\mathbb{N}}$ be a sequence of i.i.d.\ random variables and define $M_n:=\max\{B_1,\ldots,B_n\}$. If there exist norming sequences $c_n>0$, $d_n\in\mathbb{R}$ and some non-degenerate c.d.f.\ $H$ such that
\begin{equation}
 c_n^{-1}(M_n - d_n) \stackrel{d}{\rightarrow} H,
 \label{eq:FisherTippett}
\end{equation}
then $H$ belongs to the type of one of the following three c.d.f.s:
\[
\begin{array}{rrcl}
\text{Fr{\'e}chet:} & \Phi_\a(x) & = & \left\{\begin{array}{ll} 0, &x\leq 0 \\ \exp\{-x^{-\a}\}, \quad &x>0\end{array}\right. \quad \a>0. \\
\text{Weibull:} & \Psi_\a(x) & = & \left\{\begin{array}{ll}\exp\{-(-x)^\a\}, \quad &x\leq 0 \\ 1, &x>0\end{array}\right. \quad \a>0. \\
\text{Gumbel:} & \Lambda(x) & = & \exp\{-e^{-x}\}, \quad x\in\mathbb{R}. \\
\end{array}
\]
The three distributions above are referred to as the \textit{extreme value distributions}.
\end{theorem}

A c.d.f.\ $F$ is said to be in the \textit{maximum domain of attraction of $H$} if there exist norming sequences $c_n$ and $d_n$ such that \eqref{eq:FisherTippett} holds. In this case, we write $F\in \mathrm{MDA}(H)$. A large body of literature has identified conditions on $F$ such that $F\in\mathrm{MDA}(H)$.
Excellent collections of such and related results can be found in \citet{embrechts1997modelling} and \citet{resnick1987extreme}. For reasons of convenience, we only mention a characterisation theorem for $\mathrm{MDA}(\Lambda)$:
\begin{theorem}[\citet{embrechts1997modelling}, Theorem 3.3.26] \label{thm:MDAGumbel}
The c.d.f.\ $F$ with right endpoint $x_R\leq \infty$ belongs to the maximum domain of attraction of $\Lambda$ if and only if there exists some $z<x_R$ such that $\overline{F}$ admits the representation
\begin{equation*}
 \overline{F}(x) = c(x)\exp\left\{-\int_z^x \frac{g(t)}{f(t)}\,\mathrm{d} t\right\}, \qquad z<x<x_R,
\end{equation*}
where $c(\cdot)$ and $g(\cdot)$ are measurable functions satisfying $c(x)\rightarrow c>0$ and $g(t)\rightarrow 1$ as $t\uparrow x_R$, and $f(\cdot)$ is a positive, absolutely continuous function (with respect to the Lebesgue measure) with density $f'(x)$ having $\lim_{x\uparrow x_R} f'(x) = 0$.

If $F\in\mathrm{MDA}(\Lambda)$, then the norming constants can be chosen as $c_n = f(d_n)$ and $d_n=\Finv(1-n^{-1})$. A possible choice for the function $f(\cdot)$ is $f(\cdot) = 1/h^*(\cdot)$.
\end{theorem}
The function $f(\cdot)$ in the above theorem is unique up to asymptotic equivalence. We refer to $f$ as the \textit{auxiliary function} of $\overline{F}$.
Also, we note the following property of $f(\\cdot)$:\n\n\\begin{lemma}[\\citet{resnick1987extreme}, Lemma~1.2] \\label{lem:xfx}\nSuppose that $f(\\cdot)$ is an absolutely continuous auxiliary function with $f'(x)\\rightarrow 0$ as $x\\uparrow x_R$.\n\\begin{itemize}\n\\item If $x_R=\\infty$, then $\\lim_{x\\rightarrow\\infty} \\frac{f(x)}{x} = 0$.\n\\item If $x_R<\\infty$, then $\\lim_{x\\uparrow x_R} \\frac{f(x)}{x_R-x} = 0$.\n\\end{itemize}\n\\end{lemma}\n\nThis section's final lemma shows that $G\\in\\mathrm{MDA}(\\Lambda)$ whenever $F\\in\\mathrm{MDA}(\\Lambda)$:\n\\begin{lemma} \\label{lem:FandGareGumbel}\nIf $F\\in\\mathrm{MDA}(\\Lambda)$, then $G\\in\\mathrm{MDA}(\\Lambda)$ and any auxiliary function for $F$ is also an auxiliary function for $G$.\n\\end{lemma}\n\\begin{proof}\nAccording to Theorem~3.3.27 in \\citet{embrechts1997modelling}, $G\\in\\mathrm{MDA}(\\Lambda)$ with auxiliary function $f(\\cdot)$ if and only if \n$\n \\lim_{x\\uparrow x_R} \\bar{G}(x+t f(x))\/\\bar{G}(x) = e^{-t}\n$\nfor all $t\\in\\mathbb{R}$. It is straightforward to check that the above relation holds for any auxiliary function $f(\\cdot)$ of $F$ by using l'H{\\^o}pital and $\\lim_{x\\uparrow x_R} f'(x) = 0$.\n\\end{proof}\n\n\n\\subsection*{Asymptotic relations}\nLet $f(\\cdot)$ and $g(\\cdot)$ denote two positive functions and $A$ and $B$ two random variables. We write $f \\sim g$ if $\\lim_{z\\uparrow z^*} f(z)\/g(z)=1$, where the appropriate limit $z\\uparrow z^*$ depends on and should be clear from the context; it usually equals $x\\uparrow x_R$ or ${\\r\\uparrow 1}$. Similarly, we adopt the conventions $f=o(g)$ if $\\limsup_{z\\uparrow z^*} f(z)\/g(z)=0$, $f=\\O(g)$ if $\\limsup_{z\\uparrow z^*} f(z)\/g(z) < \\infty$ and $f=\\Theta(g)$ if $0< \\liminf_{z\\uparrow z^*} f(z)\/g(z) \\leq \\limsup_{z\\uparrow z^*} f(z)\/g(z) < \\infty$. 
We write $A\leq_{st} B$ if the relation $\P(A>x) \leq \P(B>x)$ is satisfied for all $x\in\mathbb{R}$.

Finally, the complementary error function is defined as $\erfc(x) := 2\pi^{-1/2}\int_x^\infty e^{-u^2}\,\mathrm{d} u$.

\section{Main results and discussion}
\label{sec:mainresults}
This section presents and discusses our main results. Theorems~\ref{thm:ETMatuszewska} and \ref{thm:ETextreme} consider the asymptotic behaviour of the mean sojourn time $\mathbb{E}[T_\mathrm{FB}]$ for various classes of distributions. Theorem~\ref{thm:TmeanGumbel} connects the asymptotic behaviour of $\overline{F}(\inv{G}(\r))$ to the literature on extreme value theory. As a consequence, the expressions obtained in Theorem~\ref{thm:ETextreme} can be specified for many distributions in $\mathrm{MDA}(\Lambda)$. Theorem~\ref{thm:ToverET} shifts focus to the distribution of $T_\mathrm{FB}$ and states that the scaled sojourn time $T_\mathrm{FB}/\mathbb{E}[T_\mathrm{FB}]$ tends to zero in probability. In contrast, Theorem~\ref{thm:Ttail} shows that a certain fraction of jobs experiences a sojourn time of order $(1-\r)^{-2}$. This result is obtained through the Laplace transform of the residual sojourn time $T^*_\mathrm{FB}$, for which we give an integral representation. The proofs of the theorems are postponed to later sections.

Recall that $\overline{F}(\inv{G}(\r)) = \mathbb{E}[B](1-\r)h^*(\inv{G}(\r))$. Our first theorem presents the growth rate of $\mathbb{E}[T_\mathrm{FB}]$.
\begin{theorem} \label{thm:ETMatuszewska}
Assume that either $x_R=\infty$ and $-\infty<\beta(\overline{F})\leq \a(\overline{F}) < -2$, or that $x_R<\infty$ and $-\infty<\beta(\overline{F}(x_R-(\cdot)^{-1}))\leq \a(\overline{F}(x_R-(\cdot)^{-1})) < 0$.
Then the relations\n\\begin{equation}\n \\mathbb{E}[T_\\mathrm{FB}] = \\Theta\\left(\\frac{\\overline{F}(\\inv{G}(\\r))}{(1-\\r)^2}\\right) = \\Theta\\left(\\frac{h^*(\\inv{G}(\\r))}{1-\\r}\\right)\n\\end{equation}\nhold as ${\\r\\uparrow 1}$, where $\\lim_{{\\r\\uparrow 1}} h^*(\\inv{G}(\\r)) = 0$ if $x_R=\\infty$ and $\\lim_{{\\r\\uparrow 1}} h^*(\\inv{G}(\\r)) = \\infty$ if $x_R<\\infty$. \nAlternatively, assume $x_R=\\infty$ and $\\beta(\\overline{F}(x))> -2$. Then the relation\n\\begin{equation}\n \\mathbb{E}[T_\\mathrm{FB}] = \\Theta\\left(\\log \\frac{1}{1-\\r}\\right)\n\\end{equation}\nholds as ${\\r\\uparrow 1}$. \n\\end{theorem}\nTheorem~\\ref{thm:ETMatuszewska} shows that the behaviour of $\\mathbb{E}[T_\\mathrm{FB}]$ is fundamentally different for $\\a(\\overline{F})<-2$ and $\\beta(\\overline{F}(x))> -2$. In the first case, the variance of $F$ is bounded and therefore the expected remaining busy period duration is of order $\\Theta((1-\\r)^{-2})$. Our analysis roughly shows that all jobs of size $\\inv{G}(\\r)$ and larger will remain in the system until the end of the busy period, and hence experience a sojourn time of order $\\Theta((1-\\r)^{-2})$. The theorem shows that, as the work intensity increases to unity, the contribution of these jobs to the average sojourn time determines the overall average sojourn time. \n\nThe above argumentation does not apply in case $\\beta(\\overline{F}(x))> -2$, since then the expected remaining busy period duration is infinite. It turns out that in this case the mean sojourn time of a large job of size $x$ is of the same order as the time that the job is in service, which has expectation $x\/(1-\\r_x)$. 
The result follows after integrating over the job size distribution.

Additionally, it can be shown that the statements in Theorem~\ref{thm:ETMatuszewska} also hold if $F\in\mathrm{MDA}(\Lambda)$, which is a special case of $\a(\overline{F})=\beta(\overline{F})=-\infty$ or $\a(\overline{F}(x_R-(\cdot)^{-1}))=\beta(\overline{F}(x_R-(\cdot)^{-1}))=-\infty$. In this case, as well as in case $\overline{F}(\cdot)$ or $\overline{F}(x_R-(\cdot)^{-1})$ is regularly varying, one can show that $(1-\r)^2\mathbb{E}[T_\mathrm{FB}]/\overline{F}(\inv{G}(\r))$ converges. Theorem~\ref{thm:ETextreme} specifies Theorem~\ref{thm:ETMatuszewska} for the aforementioned cases, as well as for distributions with an atom in their right endpoint.

\begin{theorem} \label{thm:ETextreme}
The following relations hold as ${\r\uparrow 1}$:
\begin{enumerate}[(i)]
\item If $F\in\mathrm{MDA}(\Phi_\a)$, $\a\in(1,2)$, then
 $
 \mathbb{E}[T_\mathrm{FB}] \sim \frac{\a}{2-\a}\mathbb{E}[B] \log\frac{1}{1-\r}.
 $
\item If $F\in\mathrm{MDA}(H)$, then
 $
 \mathbb{E}[T_\mathrm{FB}] \sim \frac{r(H) \mathbb{E}[B^*] \overline{F}(\inv{G}(\r))}{(1-\r)^2} = \frac{r(H) \mathbb{E}[B^2] h^*(\inv{G}(\r))}{2(1-\r)},
 $
 where
 \begin{equation}
 r(H) = \left\{\begin{array}{ll}
 \frac{\pi/(\a-1)}{\sin(\pi/(\a-1))} \frac{\a}{\a-1} \quad & \text{ if } H=\Phi_\a, \a>2, \\
 1 & \text{ if } H=\Lambda, \text{ and } \\
 \frac{\pi/(\a+1)}{\sin(\pi/(\a+1))} \frac{\a}{\a+1} & \text{ if } H=\Psi_\a, \a>0.
 \end{array}\right.
 \label{eq:rH}
 \end{equation}
 Additionally, if $H=\Phi_\a$, $\a>2$, then $\lim_{{\r\uparrow 1}} h^*(\inv{G}(\r)) = 0$, whereas if either $H=\Lambda$ and $x_R<\infty$ or $H=\Psi_\a$, $\a>0$, then $\lim_{{\r\uparrow 1}} h^*(\inv{G}(\r)) = \infty$.
\item If $F$ has an atom in $x_R<\infty$, say $\lim_{\d\downarrow 0} \overline{F}(x_R-\d)=p>0$, then
 $
\\mathbb{E}[T_\\mathrm{FB}] \\sim \\frac{p\\mathbb{E}[B^*]}{(1-\\r)^2}.\n $\n\\end{enumerate}\n\\end{theorem}\n\nThe expressions in Theorems~\\ref{thm:ETMatuszewska} and \\ref{thm:ETextreme} give insight into the asymptotic behaviour of $\\mathbb{E}[T_\\mathrm{FB}]$. The following corollary shows that the asymptotic expressions above may be specified further if the job sizes are Pareto distributed. This extends the result by \\citet{bansal2006handling}, who derived the growth factor of $\\mathbb{E}[T_\\mathrm{FB}]$ but not the exact asymptotics.\n\\begin{corollary} \\label{cor:ETPareto}\nAssume $\\overline{F}(x) = (x\/x_L)^{-\\a}, x\\geq x_L$. Then the relations\n\\begin{equation}\n \\mathbb{E}[T_\\mathrm{FB}] \\sim \\left\\{\\begin{array}{ll}\n \\frac{\\a}{2-\\a}\\mathbb{E}[B] \\log\\frac{1}{1-\\r} & \\text{if } \\a\\in(1,2), \\\\\n \\frac{\\pi\/(\\a-1)}{2\\sin(\\pi\/(\\a-1))} \\frac{\\mathbb{E}[B^2]\\a^{\\frac{\\a}{\\a-1}}}{x_L(1-\\r)^{\\frac{\\a-2}{\\a-1}}} \\quad & \\text{if } \\a\\in(2,\\infty),\n \\end{array}\\right.\n\\end{equation}\nhold as ${\\r\\uparrow 1}$.\n\\end{corollary}\n\\begin{proof}\nOne may derive that $\\bar{G}(x) = \\frac{1}{\\a}\\left(\\frac{x}{x_L}\\right)^{1-\\a}$ for $x\\geq x_L$ and sequentially that $h^*(x)=\\frac{\\a-1}{x}$ for $x\\geq x_L$ and $\\inv{G}(\\r) = x_L(\\a(1-\\r))^{\\frac{-1}{\\a-1}}$ for $\\r\\geq 1-1\/\\a$. The result then follows from Theorem\n\\ref{thm:ETextreme}.\n\\end{proof}\n\nCorollary~\\ref{cor:ETPareto} exemplifies that the asymptotic growth of $\\mathbb{E}[T_\\mathrm{FB}]$ may be specified in some cases. However, it is often non-trivial to analyse the behaviour of $\\overline{F}(\\inv{G}(\\r))$ or equivalently $h^*(\\inv{G}(\\r))$. Theorem~\\ref{thm:TmeanGumbel} aims to overcome this problem if $F\\in\\mathrm{MDA}(\\Lambda)$ by presenting a relation between $h^*(\\inv{G}(\\r))$ and norming constants $c_n$ of $F$, which can often be found in the large body of literature on extreme value theory. 
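The exact asymptotics in Corollary~\ref{cor:ETPareto} can also be checked numerically, by evaluating the finite-$\r$ expression~\eqref{eq:T} directly and comparing it with the heavy-traffic approximation. The sketch below is our own illustration (not part of the paper; all function names are ours) for the finite-variance case $\a=3$, $x_L=1$, using a simple midpoint rule after the substitution $w=x_L/x$.

```python
import math

# Numerical sanity check of Corollary cor:ETPareto (alpha > 2 case) for
# Pareto job sizes Fbar(x) = (x / x_L)^(-alpha), x >= x_L. Our own sketch.
ALPHA, XL = 3.0, 1.0
EB = ALPHA * XL / (ALPHA - 1.0)        # E[B]   = 3/2
EB2 = ALPHA * XL ** 2 / (ALPHA - 2.0)  # E[B^2] = 3

def mean_sojourn(rho, n=40000):
    """Evaluate the exact mean sojourn time (eq:T) by a midpoint rule;
    with w = x_L / x the density dF(x) dx becomes alpha * w^(alpha-1) dw."""
    lam, total = rho / EB, 0.0
    for i in range(n):
        w = (i + 0.5) / n
        x = XL / w
        # truncated moments E[B ^ x] and m2(x) for the Pareto law above
        ebx = XL * (1.0 + (1.0 - (x / XL) ** (1.0 - ALPHA)) / (ALPHA - 1.0))
        m2x = XL ** 2 * (1.0 + 2.0 * (1.0 - (x / XL) ** (2.0 - ALPHA)) / (ALPHA - 2.0))
        rho_x = lam * ebx
        dens = ALPHA * w ** (ALPHA - 1.0)
        total += (x / (1.0 - rho_x) + lam * m2x / (2.0 * (1.0 - rho_x) ** 2)) * dens / n
    return total

def asymptote(rho):
    """The alpha > 2 expression of Corollary cor:ETPareto."""
    th = math.pi / (ALPHA - 1.0)
    c = th / math.sin(th) * EB2 * ALPHA ** (ALPHA / (ALPHA - 1.0)) / (2.0 * XL)
    return c * (1.0 - rho) ** (-(ALPHA - 2.0) / (ALPHA - 1.0))

for rho in (0.9, 0.99, 0.999):
    print(rho, mean_sojourn(rho) / asymptote(rho))
```

The printed ratios approach one from below as $\r\uparrow 1$, illustrating how slowly the $(1-\r)^{-(\a-2)/(\a-1)}$ asymptote is attained.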

\begin{theorem} \label{thm:TmeanGumbel}
 Assume $F\in \mathrm{MDA}(\Lambda)$ and $x_R=\infty$, and let $c_n$, $d_n$ be such that $c_n^{-1}(M_n-d_n) \stackrel{d}{\rightarrow} \Lambda$. Define $\l^{(n)} = (1-n^{-1})/\mathbb{E}[B]$ so that $\r^{(n)}=1-n^{-1}$.
 \begin{enumerate}[(i)]
 \item If there exist $\a>0$ and a slowly varying function $l(x)$ such that $-\log \overline{F}(x)\sim l(x) x^\a$ as $x\rightarrow\infty$, then $h^*(x)\sim \a l(x) x^{\a-1}$ if and only if
 \begin{equation}
 \inf_{\l\downarrow 1}\liminf_{x\rightarrow\infty}\inf_{t\in[1,\l]}\{\log h^*(tx) - \log h^*(x)\} \geq 0.
 \label{eq:tauberian}
 \end{equation}
 If \eqref{eq:tauberian} holds, then $\mathbb{E}[T_\mathrm{FB}^{(n)}] \sim \frac{\mathbb{E}[B^2]}{2(1-\r^{(n)})c_n}$ as $n\rightarrow\infty$.
 \item If there exists a function $l(x):[0,\infty)\rightarrow \mathbb{R}$ with $\liminf_{x\rightarrow \infty} l(x) >1$ such that for all $\l>0$
 \begin{equation}
 \lim_{x\rightarrow\infty} \frac{-\log\overline{F}(\l x)+\log\overline{F}(x)}{l(x)} = \log(\l)
 \end{equation}
 and $L=\lim_{x\rightarrow \infty} \frac{\log(x)}{l(x)}$ exists in $[0,\infty]$, then $\lim_{n\rightarrow\infty} \frac{2(1-\r^{(n)})c_n}{\mathbb{E}[B^2]} \mathbb{E}[T_\mathrm{FB}^{(n)}] = e^{-L}$.
 \end{enumerate}
 The same results hold if $x_R<\infty$, provided that $\overline{F}(\cdot)$ and $h^*(\cdot)$ in (i) and (ii) are replaced by $\overline{F}(x_R-\frac{1}{\cdot})$ and $h^*(x_R-\frac{1}{\cdot})/(\cdot)^2$, respectively.
\end{theorem}

\medskip\noindent\textit{Remark.} Condition~\eqref{eq:tauberian} in part (i) of Theorem~\ref{thm:TmeanGumbel} is a \textit{Tauberian condition}, and originates from Theorem~1.7.5 in \citet{bingham1989regular}. A Tauberian theorem imposes a condition on a transformed function (here $h^*$) in order to deduce the asymptotic behaviour of that transform. Condition~\eqref{eq:tauberian} is non-restrictive in the sense that it holds whenever the conclusion does: if $h^*(x) \sim\a l(x) x^{\a-1}$, then condition~\eqref{eq:tauberian} is automatically met, so the Tauberian condition does not restrict the class of distribution functions $F$ to which the theorem applies. At the same time, condition~\eqref{eq:tauberian} is necessary for the result to hold. The interested reader is referred to Section~XIII.5 in \citet{feller1968introduction} and Section~1.7 in \citet{bingham1989regular}.

Theorem~\ref{thm:MDAGumbel} implies that $c_n \sim 1/h^*(\inv{G}(1-n^{-1}))$ for many distributions in $\mathrm{MDA}(\Lambda)$. As $c_n$ may be chosen as $1/h^*(\Finv(1-n^{-1}))$, Theorem~\ref{thm:TmeanGumbel} implicitly states conditions under which $\lim_{n\rightarrow \infty} h^*(\inv{G}(1-n^{-1}))/h^*(\Finv(1-n^{-1})) = \lim_{y\uparrow 1} (1-y)^{-2} \overline{F}(\inv{G}(y))\bar{G}(\Finv(y))$ exists, and exploits this limit to write $\mathbb{E}[T^{(n)}_{\mathrm{FB}}]$ as a function of $c_n$ rather than of $h^*(\inv{G}(1-n^{-1}))$.
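The first row of Table~\ref{tab:Gumbel} below (exponential-like tails) is easy to test numerically: for unit-rate exponential job sizes it predicts $\mathbb{E}[T_\mathrm{FB}]\sim \mathbb{E}[B^2]\mu/(2(1-\r)) = 1/(1-\r)$. The following sketch (our own code, with hypothetical function names, not taken from the paper) evaluates~\eqref{eq:T} after the substitution $u=e^{-x}$.

```python
import math

# Rough check of the exponential-like row of Table tab:Gumbel: for
# Fbar(x) = exp(-x) (mu = 1, E[B] = 1, E[B^2] = 2) the prediction is
# E[T_FB] ~ E[B^2] * mu / (2 * (1 - rho)) = 1 / (1 - rho).
def mean_sojourn_exp(rho, n=200000):
    """Evaluate eq. (eq:T) for unit-rate exponential job sizes.

    With u = exp(-x), dF(x) becomes du, E[B ^ x] becomes 1 - u and
    m2(x) becomes 2 * (1 - (1 - ln u) * u), so we integrate over (0, 1)."""
    lam, total = rho, 0.0  # lam = rho / E[B] with E[B] = 1
    for i in range(n):
        u = (i + 0.5) / n
        one_minus_rho_x = 1.0 - rho + lam * u
        m2 = 2.0 * (1.0 - (1.0 - math.log(u)) * u)
        total += (-math.log(u) / one_minus_rho_x
                  + lam * m2 / (2.0 * one_minus_rho_x ** 2)) / n
    return total

for rho in (0.99, 0.999):
    print(rho, mean_sojourn_exp(rho) * (1.0 - rho))
```

In such a rough evaluation the product $(1-\r)\mathbb{E}[T_\mathrm{FB}]$ stays close to one as $\r\uparrow 1$, in line with the table entry.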
To illustrate the implications of Theorem~\ref{thm:TmeanGumbel}, the exact asymptotic behaviour of several well-known distributions is presented in Table~\ref{tab:Gumbel}.
\begin{sidewaystable}
\[\begin{array}{|lll|c|l|}
\hline
\text{Distribution} & \text{c.c.d.f.\ $\overline{F}$ or p.d.f.\ $F'$} & \text{Parameters} & L & \mathbb{E}[T_\mathrm{FB}] \sim \\
\hline
\text{Exponential-like} & \overline{F}(x) \sim K e^{-\mu x} & K,\mu>0 & - & \frac{\mathbb{E}[B^2]\mu}{2(1-\r)} \\
\text{Weibull-like} & \overline{F}(x) \sim K x^\a e^{-\mu x^\beta} & K,\mu,\beta>0, \a\in\mathbb{R} & - & \frac{\beta \mu^{1/\beta} \mathbb{E}[B^2]}{2(1-\r)\log\left(\frac{1}{1-\r}\right)^{1/\beta-1}} \\
\text{Gamma} & F'(x) = \frac{\beta^\a}{\Gamma(\a)}x^{\a-1}e^{-\beta x} & \a,\b>0 & - & \frac{\mathbb{E}[B^2] \beta}{2(1-\r)} \\
\text{Normal} & F'(x) = \frac{1}{\sqrt{2\pi}} e^{-x^2/2} & & - & \frac{\mathbb{E}[B^2] \log\left(\frac{1}{1-\r}\right)^{1/2}}{\sqrt{2} (1-\r)} \\
\text{Lognormal} & F'(x) = \frac{1}{\sqrt{2\pi}\sigma x}e^{-(\log(x)-\mu)^2/(2\sigma^2)} & \sigma>0, \mu\in\mathbb{R} & \sigma^2 & \frac{e^{-\sigma^2} \mathbb{E}[B^2] \log\left(\frac{1}{1-\r}\right)^{1/2} }{\sigma \sqrt{2}(1-\r)\exp\left[\mu + \sigma\left(\sqrt{2\log\left(\frac{1}{1-\r}\right)}-\frac{\log(4\pi) + \log\log(1/(1-\r))}{2 \sqrt{2\log(1/(1-\r))}}\right)\right]} \\
\text{Finite Exponential} & \overline{F}(x) = K e^{-\frac{\mu}{x_R-x}} & K,\mu>0, x<x_R<\infty & - & \frac{\mathbb{E}[B^2]\log\left(\frac{1}{1-\r}\right)^{2}}{2\mu(1-\r)} \\
\text{Benktander-I} & \overline{F}(x) = \left(1+\frac{2\beta}{\a}\log(x)\right) & \a,\beta>0, x>1 & \frac{1}{2\beta} & \frac{e^{-\frac{1}{2\beta}}\mathbb{E}[B^2] \sqrt{\beta\log\left(\frac{1}{1-\r}\right)}}{(1-\r)\exp\left[-\frac{\a+1}{\beta}+\sqrt{\frac{\log\left(\frac{1}{1-\r}\right)}{\beta}}\right]} \\
 & \qquad \times e^{-(\beta\log(x)^2+(\a+1)\log(x))} &&& \\
\text{Benktander-II} & \overline{F}(x) = x^{-(1-\beta)} e^{-\frac{\a}{\beta}(x^\beta-1)} & \a>0, 0<\beta<1, x>1 & - & \frac{\a^{1/\beta}\mathbb{E}[B^2]}{2\beta^{1/\beta-1}(1-\r)\log\left(\frac{1}{1-\r}\right)^{1/\beta-1}} \\
\hline
\end{array}\]
\caption{Asymptotic expressions for the mean sojourn time for several well-known distributions in $\mathrm{MDA}(\Lambda)$, characterized by either their tail distribution or their probability density function (p.d.f.). These expressions follow from Table~3.4.4 in \citet{embrechts1997modelling} through Theorem~\ref{thm:TmeanGumbel}, where it is assumed that relation~\eqref{eq:tauberian} holds.}
\label{tab:Gumbel}
\end{sidewaystable}

We take a brief moment to compare the asymptotic mean sojourn time under $\mathrm{FB}$ to that under $\mathrm{SRPT}$ in $\mathrm{M/GI/1}$ models. Clearly, $\mathrm{FB}$ can perform no better than $\mathrm{SRPT}$ due to $\mathrm{SRPT}$'s optimality \citep{schrage1968letter}. The ratio of their respective mean sojourn times is shown to be unbounded if the job sizes are exponentially distributed or if the job size distribution has finite support \citep{kleinrock1976queueingcomputer, bansal2005average, nuyens2008foreground, lin2011heavy}, but bounded if the job sizes are Pareto distributed \citep{bansal2006handling, lin2011heavy}. To the best of the authors' knowledge, no results of this nature are known if job sizes are Weibull distributed.

The following corollary specifies the asymptotic advantage of $\mathrm{SRPT}$ over $\mathrm{FB}$ if the job sizes are Pareto distributed, and presents the first such results for Weibull distributed job sizes.
Its statements follow directly from Corollaries~1 and 2 in \\citet{lin2011heavy} and the earlier results in this section.\n\\begin{corollary}\n\\label{cor:FBvsSRPT}\nThe following relations hold as ${\\r\\uparrow 1}$:\n\\begin{enumerate}[(i)]\n\\item If $\\overline{F}(x) = (x\/x_L)^{-\\a}, x\\geq x_L>0$ and $\\a\\in(1,2)$, then $\\mathbb{E}[T_\\mathrm{FB}]\/\\mathbb{E}[T_{\\mathrm{SRPT}}] \\sim \\a^2$.\n\\item If $\\overline{F}(x) = (x\/x_L)^{-\\a}, x\\geq x_L>0$ and $\\a>2$, then $\\mathbb{E}[T_\\mathrm{FB}]\/\\mathbb{E}[T_{\\mathrm{SRPT}}] \\sim \\a^{\\frac{\\a}{\\a-1}}$.\n\\item If $\\overline{F}(x) = e^{-\\mu x^\\beta}, x\\geq 0$ and $\\beta>0$, then $\\mathbb{E}[T_\\mathrm{FB}]\/\\mathbb{E}[T_{\\mathrm{SRPT}}] \\sim \\beta \\log\\left(\\frac{1}{1-\\r}\\right)$.\n\\end{enumerate}\n\\end{corollary}\n\nNow that the asymptotic behaviour of the mean sojourn time under $\\mathrm{FB}$ has been quantified, it is natural to investigate more complex characteristics. One such characteristic is the behaviour of the tail of the sojourn time distribution, where one usually starts by analysing the distribution of the sojourn time normalized by its mean, $T_\\mathrm{FB}\/\\mathbb{E}[T_\\mathrm{FB}]$. 
The following theorem indicates that this random variable converges to zero in probability, meaning that almost every job experiences a sojourn time that is significantly shorter than the mean sojourn time as ${\\r\\uparrow 1}$:\n\\begin{theorem} \\label{thm:ToverET}\nIf either\n\\begin{itemize}\n\\item $x_R=\\infty$ and either $\\beta(\\overline{F})>-2$ or $-\\infty<\\beta(\\overline{F})\\leq \\a(\\overline{F}) < -2$, or\n\\item $x_R<\\infty$ and $-\\infty<\\beta(\\overline{F}(x_R-(\\cdot)^{-1}))\\leq \\a(\\overline{F}(x_R-(\\cdot)^{-1})) < 0$, or\n\\item $F\\in\\mathrm{MDA}(\\Lambda)$,\n\\end{itemize}\nthen $\\frac{T_\\mathrm{FB}}{\\mathbb{E}[T_\\mathrm{FB}]}\\stackrel{p}{\\rightarrow} 0$ as ${\\r\\uparrow 1}$.\n\\end{theorem}\n\nTheorem~\\ref{thm:ToverET} indicates that a decreasing fraction of jobs experiences a sojourn time of at least duration $\\mathbb{E}[T_\\mathrm{FB}]$. Our final main result aims to specify both the size of this fraction, and the growth factor of the associated jobs' sojourn time.\n\nThe intuition from the proof of Theorem~\\ref{thm:ETMatuszewska} suggests that $T_\\mathrm{FB}$ scales as $(1-\\r)^{-2}$, but only for jobs of size at least $\\inv{G}(\\r)$. This makes it conceivable that the scaled probability $\\P((1-\\r)^2 T_\\mathrm{FB} > y)\/\\overline{F}(\\inv{G}(\\r))$ may be $\\Theta(1)$ as ${\\r\\uparrow 1}$. Theorem~\\ref{thm:Ttail} confirms this hypothesis, and additionally shows that the residual sojourn time $T_\\mathrm{FB}^*$ with density $\\P(T_\\mathrm{FB}>x)\/\\mathbb{E}[T_\\mathrm{FB}]$ scales as $(1-\\r)^{-2}$.\n\\begin{theorem} \\label{thm:Ttail}\nAssume $F\\in\\mathrm{MDA}(H)$, where $H$ is an extreme value distribution with finite $(2+\\varepsilon)$-th moment for some $\\varepsilon>0$. Let $r(H)$ be as in relation~\\eqref{eq:rH}. 
Then $(1-\\r)^2T^*_\\mathrm{FB}$ converges to a non-degenerate random variable with monotone density $g^*$ as ${\\r\\uparrow 1}$, and\n\\begin{equation}\n \\lim_{{\\r\\uparrow 1}} \\frac{\\P((1-\\r)^2T_\\mathrm{FB}>y)}{r(H)\\mathbb{E}[B^*]\\overline{F}(\\inv{G}(\\r))} = g^*(y)\n\\end{equation}\nalmost everywhere. Here,\n\\begin{align}\n g^*(t) &= \\int_0^1 r(H)^{-1} 8 \\nu g(t,\\nu) \\left(\\frac{1-\\nu}{\\nu}\\right)^{p(H)} \\,\\mathrm{d} \\nu, \\\\\n g(t,\\nu) &= \\frac{e^{-\\frac{t}{4\\mathbb{E}[B^*]\\nu^2}}}{4\\mathbb{E}[B^*]\\nu^2} \\left(\\frac{\\sqrt{t}}{\\nu \\sqrt{\\pi \\mathbb{E}[B^*]}} - \\frac{t}{2\\mathbb{E}[B^*]\\nu^2}e^{\\frac{t}{4\\mathbb{E}[B^*]\\nu^2}} \\erfc\\left(\\frac{1}{2\\nu}\\sqrt{\\frac{t}{\\mathbb{E}[B^*]}}\\right)\\right),\n\\end{align}\nand $p(H)=\\frac{\\a}{\\a-1}$ if $H=\\Phi_\\a,\\a>2$; $p(H)=1$ if $H=\\Lambda$ and $p(H)=\\frac{\\a}{\\a+1}$ if $H=\\Psi_\\a,\\a>0$.\n\\end{theorem}\n\n\nAll theorems presented in this section are now proven in order. First, Theorems~\\ref{thm:ETMatuszewska} and \\ref{thm:ETextreme} are proven in Section~\\ref{sec:Tmean}. Then, Theorem~\\ref{thm:TmeanGumbel} is justified in Section~\\ref{sec:TmeanGumbel}. Finally, Sections~\\ref{sec:ToverET} and \\ref{sec:Ttail} respectively validate Theorems~\\ref{thm:ToverET} and \\ref{thm:Ttail}.\n\n\n\n\n\n\\section{Asymptotic behaviour of the mean sojourn time} \n\\label{sec:Tmean}\nIn this section, we prove Theorems~\\ref{thm:ETMatuszewska} and \\ref{thm:ETextreme} in order. The intuition behind the theorems is that jobs of size $x$ can only be completed once the server has finished processing of all jobs of size at most $x$. Additionally, jobs of size $x$ experience a system with job sizes $B_i\\wedge x$ since no job will receive more than $x$ units of processing as long as there are size $x$ jobs in the system. 
One thus expects all jobs of size $x$ to stay in the system for the duration of a remaining busy period in the truncated system, which is expected to last for \n$\\Theta(\\mathbb{E}[(B\\wedge x)^2]\/(1-\\r_x)^2)$ time. \n\nNow, if $\\mathbb{E}[B^2]<\\infty$ and $x_\\r^\\nu$ is such that $(1-\\r)\/(1-\\r_{x_\\r^\\nu})=\\nu\\in(1-\\r,1)$, then one can see from \\eqref{eq:Tx} that\n\\begin{equation}\n (1-\\r)^2\\mathbb{E}[T_\\mathrm{FB}(x_\\r^\\nu)] = \\nu(1-\\r)x_\\r^\\nu + \\nu^2 \\frac{\\l m_2(x_\\r^\\nu)}{2}.\n \\label{eq:Txscaled}\n\\end{equation}\nIt turns out that the asymptotic behaviour of $(1-\\r)^2\\mathbb{E}[T_\\mathrm{FB}]$ is now determined by the fraction of jobs for which $\\nu$ takes values away from zero.\n\nIf instead $\\mathbb{E}[B^2]=\\infty$, then it will be shown that the growth rate of the second term in \\eqref{eq:Tx} is bounded by the growth rate of $x\\bar{G}(x)$. It turns out that the sojourn time is of the same order as the time that a job receives service, which is of order $\\Theta(x\/(1-\\r_x))$.\n\nBoth theorems follow after integrating $\\mathbb{E}[T_\\mathrm{FB}(x)]$ over all possible values of $x$, as shown in \\eqref{eq:T}. 
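As a quick numerical sanity check (not part of the argument), the scaled identity \eqref{eq:Txscaled} can be verified directly; the sketch below assumes $\mathrm{Exp}(1)$ job sizes and an illustrative arrival rate $\lambda=0.9$, for which $\r_x$ and $m_2(x)$ have elementary closed forms.

```python
import math

# Numerical sanity check of the scaled identity (eq. Txscaled) for the
# M/G/1-FB queue with Exp(1) job sizes (an illustrative assumption):
#   E[T_FB(x)] = x/(1-rho_x) + lam*m2(x)/(2*(1-rho_x)^2)        (eq. Tx)
#   (1-rho)^2 E[T_FB(x)] = nu*(1-rho)*x + nu^2*lam*m2(x)/2,
# where nu = (1-rho)/(1-rho_x).

lam = 0.9                    # arrival rate; rho = lam * E[B] = 0.9
rho = lam * 1.0              # E[B] = 1 for Exp(1)

def rho_x(x):
    # truncated load: lam * E[B ^ x] = lam * (1 - exp(-x)) for Exp(1)
    return lam * (1.0 - math.exp(-x))

def m2(x):
    # m2(x) = E[(B ^ x)^2] = 2 * int_0^x t e^{-t} dt for Exp(1)
    return 2.0 * (1.0 - math.exp(-x) * (1.0 + x))

def ET_fb(x):
    return x / (1.0 - rho_x(x)) + lam * m2(x) / (2.0 * (1.0 - rho_x(x)) ** 2)

for x in (0.5, 1.0, 3.0, 10.0):
    nu = (1.0 - rho) / (1.0 - rho_x(x))
    lhs = (1.0 - rho) ** 2 * ET_fb(x)
    rhs = nu * (1.0 - rho) * x + nu ** 2 * lam * m2(x) / 2.0
    assert abs(lhs - rhs) < 1e-12, (x, lhs, rhs)
```

Since the identity is purely algebraic, both sides agree up to floating-point error for every job size $x$.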
By integrating by parts, we find that the first integral in \\eqref{eq:T} can be rewritten as\n\\begin{align*}\n \\int_0^\\infty \\frac{x}{1-\\r_x} \\,\\mathrm{d} F(x) \n &= \\int_0^\\infty \\frac{\\overline{F}(x)}{1-\\r_x} \\,\\mathrm{d} x + \\l \\int_0^\\infty \\frac{x \\overline{F}(x)^2}{(1-\\r_x)^2}\\,\\mathrm{d} x \\\\\n &= \\frac{1}{\\l} \\log\\frac{1}{1-\\r} + \\l \\int_0^\\infty \\frac{x \\overline{F}(x)^2}{(1-\\r_x)^2}\\,\\mathrm{d} x.\n\\end{align*}\nSimilarly, the second integral can be rewritten as\n\\begin{align*}\n \\int_0^\\infty \\frac{\\lambda m_2(x)}{2(1-\\r_x)^2} \\,\\mathrm{d} F(x) \n &= \\lambda \\int_0^\\infty \\frac{x\\overline{F}(x)^2}{(1-\\r_x)^2} \\,\\mathrm{d} x + \\lambda^2 \\int_0^\\infty \\frac{m_2(x) \\overline{F}(x)^2}{(1-\\r_x)^3} \\,\\mathrm{d} x,\n\\end{align*}\nand therefore\n\\begin{align*}\n \\mathbb{E}[T_\\mathrm{FB}] \n &= \\frac{1}{\\l} \\log\\frac{1}{1-\\r} + 2\\l \\int_0^\\infty \\frac{x \\overline{F}(x)^2}{(1-\\r_x)^2}\\,\\mathrm{d} x + \\lambda^2 \\int_0^\\infty \\frac{m_2(x) \\overline{F}(x)^2}{(1-\\r_x)^3} \\,\\mathrm{d} x \\\\\n &= \\frac{\\mathbb{E}[B]}{\\r} \\log\\frac{1}{1-\\r} + 2\\r \\int_0^\\infty \\frac{x \\overline{F}(x)}{(1-\\r_x)^2}\\,\\mathrm{d} G(x) + \\frac{\\r^2}{\\mathbb{E}[B]} \\int_0^\\infty \\frac{m_2(x) \\overline{F}(x)}{(1-\\r_x)^3} \\,\\mathrm{d} G(x). \\stepcounter{equation}\\tag{\\theequation} \\label{eq:Tmain}\n\\end{align*}\n\nWe will now derive Theorems~\\ref{thm:ETMatuszewska} and \\ref{thm:ETextreme} from this relation.\n\n\n\\subsection{General Matuszewska indices}\nThis section proves Theorem~\\ref{thm:ETMatuszewska}. Relation~\\eqref{eq:Tmain} will be analysed separately for the cases $-\\infty<\\beta(\\overline{F}) \\leq \\a(\\overline{F}) < -2$ and $-2<\\beta(\\overline{F}) \\leq \\a(\\overline{F}) < 1$, which will be referred to as the finite and the infinite variance case, respectively. The finite variance case also covers $x_R<\\infty$ with $-\\infty<\\beta(\\overline{F}(x_R-(\\cdot)^{-1}))$. 
Note that we always have $\\beta(\\overline{F}(x_R-(\\cdot)^{-1}))\\leq \\a(\\overline{F}(x_R-(\\cdot)^{-1})) \\leq 0$ since $\\overline{F}(x_R-(\\cdot)^{-1})$ is non-increasing. Before proceeding, however, we introduce several results that will facilitate the analysis. \n\n\\begin{lemma} \\label{lem:onesidedMatuszewskaproduct}\nLet $f_1(\\cdot), f_2(\\cdot)$ be positive functions.\n\\begin{enumerate}[(i)]\n\\item If $\\a(f_1),\\a(f_2)<\\infty$, then $\\a(f_1\\cdot f_2)\\leq \\a(f_1)+\\a(f_2)$ and, assuming that $f_1$ is non-decreasing, $\\a(f_1\\circ f_2)\\leq \\a(f_1)\\cdot \\a(f_2)$.\n\\item If $\\beta(f_1),\\beta(f_2)>-\\infty$, then $\\beta(f_1\\cdot f_2)\\geq \\beta(f_1)+\\beta(f_2)$ and, assuming that $f_1$ is non-increasing, $\\beta(f_1\\circ f_2)\\geq \\beta(f_1)\\cdot \\beta(f_2)$.\n\\end{enumerate}\n\\end{lemma}\n\\begin{lemma} \\label{lem:onesidedMatuszewskavanish}\nLet $f$ be positive. If $\\a(f)<0$, then $\\lim_{{x\\rightarrow\\infty}} f(x) = 0$.\n\\end{lemma}\n\\begin{lemma}[\\citet{bingham1989regular}, Theorem~2.6.1] \\label{lem:onesidedMatuszewskaPI}\nLet $f$ be positive and locally integrable on $[X,\\infty)$. Let $g(x):= \\int_X^x f(t)\/t \\,\\mathrm{d} t$. If $\\beta(f)>0$, then $\\liminf_{{x\\rightarrow\\infty}} f(x) \/ g(x) >0$.\n\\end{lemma}\n\\begin{lemma}[\\citet{bingham1989regular}, Theorem~2.6.3] \\label{lem:onesidedMatuszewskaPD}\nLet $f$ be positive and measurable. Let $g(x):= \\int_x^\\infty f(t)\/t \\,\\mathrm{d} t$. \n\\begin{enumerate}[(i)]\n\\item If $\\a(f)<0$, then $g(x)<\\infty$ for all large $x$.\n\\item If $\\b(f)>-\\infty$, then $\\limsup_{{x\\rightarrow\\infty}} f(x)\/g(x) < \\infty$.\n\\end{enumerate}\n\\end{lemma}\n\\begin{lemma} \\label{lem:barGMatuszewska}\nIf $x_R=\\infty$, then $\\a(\\bar{G}) \\leq \\a(\\overline{F}) + 1$ and $\\beta(\\bar{G}) \\geq \\beta(\\overline{F}) + 1$. 
Alternatively, if $x_R<\\infty$, then $\\a(\\bar{G}(x_R-(\\cdot)^{-1})) \\leq \\a(\\overline{F}(x_R-(\\cdot)^{-1})) - 1$ and $\\beta(\\bar{G}(x_R-(\\cdot)^{-1})) \\geq \\beta(\\overline{F}(x_R-(\\cdot)^{-1})) - 1$.\n\\end{lemma}\n\\begin{lemma}\\label{lem:inverseMatuszewska2}\nIf $x_R=\\infty$ and $\\beta(\\overline{F}) >-\\infty$, then $\\beta(\\inv{G}(1-(\\cdot)^{-1})) \\geq -1\/(\\beta(\\overline{F})+1)$ and $\\alpha(\\inv{G}(1-(\\cdot)^{-1})) \\leq -1\/(\\alpha(\\overline{F})+1)$. Alternatively, if $x_R<\\infty$ and $\\beta(\\overline{F}(x_R-(\\cdot)^{-1})) >-\\infty$, then $\\beta(\\inv{G}(1-(\\cdot)^{-1})) \\geq -1\/(\\beta(\\overline{F}(x_R-(\\cdot)^{-1}))-1)$ and $\\alpha(\\inv{G}(1-(\\cdot)^{-1})) \\leq -1\/(\\alpha(\\overline{F}(x_R-(\\cdot)^{-1}))-1)$.\n\\end{lemma}\n\\begin{corollary}\\label{cor:FGinvMatuszewska}\nIf $x_R=\\infty$ and $\\beta(\\overline{F})>-\\infty$, then $\\beta(\\overline{F}(\\inv{G}(1-(\\cdot)^{-1}))) \\geq \\frac{-\\beta(\\overline{F})}{\\beta(\\overline{F})+1}$ and $\\alpha(\\overline{F}(\\inv{G}(1-(\\cdot)^{-1}))) \\leq \\frac{-\\a(\\overline{F})}{\\a(\\overline{F})+1}$. Alternatively, if $x_R<\\infty$ and $\\beta(\\overline{F}(x_R-(\\cdot)^{-1}))>-\\infty$, then $\\beta(\\overline{F}(\\inv{G}(1-(\\cdot)^{-1}))) \\geq \\frac{-\\beta(\\overline{F}(x_R-(\\cdot)^{-1}))}{\\beta(\\overline{F}(x_R-(\\cdot)^{-1}))-1}$ and $\\alpha(\\overline{F}(\\inv{G}(1-(\\cdot)^{-1}))) \\leq \\frac{-\\a(\\overline{F}(x_R-(\\cdot)^{-1}))}{\\a(\\overline{F}(x_R-(\\cdot)^{-1}))-1}$.\n\\end{corollary}\nLemma~\\ref{lem:onesidedMatuszewskaproduct} states some closure properties of Matuszewska indices. Lemma~\\ref{lem:onesidedMatuszewskavanish} gives a sufficient condition for $f$ to vanish. Lemmas~\\ref{lem:onesidedMatuszewskaPI} and \\ref{lem:onesidedMatuszewskaPD} state helpful results on the asymptotic behaviour of the ratio between a function and certain integrals over this function, depending on its Matuszewska indices. 
Lemmas~\\ref{lem:barGMatuszewska} and \\ref{lem:inverseMatuszewska2} and Corollary~\\ref{cor:FGinvMatuszewska} complement the earlier lemmas by giving bounds on the Matuszewska indices of $\\inv{G}$ and the composition of $\\overline{F}$ and $\\inv{G}$. The proofs of Lemmas~\\ref{lem:onesidedMatuszewskaproduct}, \\ref{lem:onesidedMatuszewskavanish}, \\ref{lem:barGMatuszewska} and \\ref{lem:inverseMatuszewska2}, along with several additional results, are postponed to Appendix~\\ref{app:preliminaries}. Corollary~\\ref{cor:FGinvMatuszewska} follows immediately from Lemmas~\\ref{lem:onesidedMatuszewskaproduct} and \\ref{lem:inverseMatuszewska2}.\n\n\n\\subsubsection{Finite variance} \\label{subsubsec:finitevariance}\nIn this section, we assume either $x_R=\\infty$ and $-\\infty<\\beta(\\overline{F})\\leq \\a(\\overline{F})<-2$, or $x_R<\\infty$ and $\\beta(\\overline{F}(x_R-(\\cdot)^{-1})) >-\\infty$. If $x_R=\\infty$, then $\\a((\\cdot)^2 \\overline{F}(\\cdot))<0$ and thus $\\mathbb{E}[B^2] = 2\\int_0^\\infty t \\overline{F}(t) \\,\\mathrm{d} t < \\infty$ by Lemma~\\ref{lem:onesidedMatuszewskaPD}(i); if $x_R<\\infty$ then clearly $\\mathbb{E}[B^2]<\\infty$.\n\nNoting that $\\inv{G}$ is a continuous, strictly increasing function, it follows that the function $x_\\r^\\nu:=\\inv{G}\\left(1-\\frac{1-\\r}{\\r}\\frac{1-\\nu}{\\nu}\\right)$ is well-defined for all $\\nu\\in(1-\\r,1)$. 
For this choice of $x_\\r^\\nu$, we have $\\frac{1-\\r}{1-\\r_{x_\\r^\\nu}}=\\nu$ and $\\frac{\\,\\mathrm{d} G(x_\\r^\\nu)}{\\,\\mathrm{d} \\nu} = \\frac{1-\\r}{\\r}\\frac{1}{\\nu^2}$, and therefore relation \\eqref{eq:Tmain} becomes\n\\begin{align*}\n (1-\\r)^2\\mathbb{E}[T_\\mathrm{FB}] \\hspace{-28pt}&\\hspace{28pt} \n = \\frac{\\mathbb{E}[B](1-\\r)^2}{\\r} \\log\\frac{1}{1-\\r} \n + 2\\r \\int_0^\\infty \\left(\\frac{1-\\r}{1-\\r_x}\\right)^2 x \\overline{F}(x)\\,\\mathrm{d} G(x) \\\\\n &\\hspace{28pt} \\qquad + \\frac{\\r^2}{\\mathbb{E}[B]} \\int_0^\\infty \\left(\\frac{1-\\r}{1-\\r_x}\\right)^3 \\frac{m_2(x) \\overline{F}(x)}{1-\\r} \\,\\mathrm{d} G(x) \\\\\n &= \\frac{\\mathbb{E}[B](1-\\r)^2}{\\r} \\log\\frac{1}{1-\\r} \\\\\n &\\qquad + 2(1-\\r) \\int_{1-\\r}^1 \\inv{G}\\left(1-\\frac{1-\\r}{\\r}\\frac{1-\\nu}{\\nu}\\right) \\overline{F}\\left(\\inv{G}\\left(1-\\frac{1-\\r}{\\r}\\frac{1-\\nu}{\\nu}\\right)\\right) \\,\\mathrm{d} \\nu \\\\\n &\\qquad \\qquad + \\frac{\\r}{\\mathbb{E}[B]} \\int_{1-\\r}^1 \\nu \\, m_2\\left(\\inv{G}\\left(1-\\frac{1-\\r}{\\r}\\frac{1-\\nu}{\\nu}\\right)\\right) \\overline{F}\\left(\\inv{G}\\left(1-\\frac{1-\\r}{\\r}\\frac{1-\\nu}{\\nu}\\right)\\right) \\,\\mathrm{d} \\nu.\n\\end{align*}\nDividing both sides by $\\overline{F}(\\inv{G}(\\r))$ yields\n\\begin{align*}\n \\frac{(1-\\r)^2\\mathbb{E}[T_\\mathrm{FB}]}{\\overline{F}(\\inv{G}(\\r))} &\n = \\frac{\\mathbb{E}[B](1-\\r)^2}{\\r\\overline{F}(\\inv{G}(\\r))} \\log\\frac{1}{1-\\r} \\\\\n &\\qquad + \\frac{2(1-\\r)}{\\overline{F}(\\inv{G}(\\r))} \\int_{1-\\r}^1 \\inv{G}\\left(1-\\frac{1-\\r}{\\r}\\frac{1-\\nu}{\\nu}\\right) \\overline{F}\\left(\\inv{G}\\left(1-\\frac{1-\\r}{\\r}\\frac{1-\\nu}{\\nu}\\right)\\right) \\,\\mathrm{d} \\nu \\\\\n &\\hspace{-4pt} \\qquad \\qquad + \\frac{\\r}{\\mathbb{E}[B]} \\int_{1-\\r}^1 \\nu \\, m_2\\left(\\inv{G}\\left(1-\\frac{1-\\r}{\\r}\\frac{1-\\nu}{\\nu}\\right)\\right) 
\\frac{\\overline{F}\\left(\\inv{G}\\left(1-\\frac{1-\\r}{\\r}\\frac{1-\\nu}{\\nu}\\right)\\right)}{\\overline{F}(\\inv{G}(\\r))} \\,\\mathrm{d} \\nu \\\\\n &= \\mathrm{I}(\\r) + \\mathrm{II}(\\r) + \\mathrm{III}(\\r). \\stepcounter{equation}\\tag{\\theequation} \\label{eq:meanMatuszewskadecomposed}\n\\end{align*}\nWe will show that $\\mathrm{I}(\\r) + \\mathrm{II}(\\r) = o(1)$ and $\\mathrm{III}(\\r) = \\Theta(1)$. Assume $x_R=\\infty$. Then,\nby Lemma~\\ref{lem:onesidedMatuszewskaproduct} and Corollary~\\ref{cor:FGinvMatuszewska} we find that \n\\begin{align*}\n \\a(\\mathrm{I}(1-(\\cdot)^{-1})) \n &\\leq \\a((\\cdot)^{-2}) + \\a(1\/\\overline{F}(\\inv{G}(1-(\\cdot)^{-1}))) + \\a(\\log (\\cdot)) \\\\\n &= -2 -\\beta(\\overline{F}(\\inv{G}(1-(\\cdot)^{-1}))) + 0 \n \\leq -2 + \\frac{\\beta(\\overline{F})}{\\beta(\\overline{F})+1}\n < 0, \\stepcounter{equation}\\tag{\\theequation} \\label{eq:alphaIinfinite}\n\\end{align*}\nand consequently $\\mathrm{I}(\\r) = o(1)$ as ${\\r\\uparrow 1}$ by Lemma~\\ref{lem:onesidedMatuszewskavanish}.\n\nNext, fix $0\\leq \\varepsilon<2-\\frac{\\beta(\\overline{F})}{\\beta(\\overline{F})+1}$. Substitution of $w=\\frac{\\r}{1-\\r}\\frac{\\nu}{1-\\nu}$ in $\\mathrm{II}(\\r)$ yields\n\\begin{align*}\n \\mathrm{II}(\\r)\n &= \\frac{2(1-\\r)}{\\overline{F}(\\inv{G}(\\r))} \\int_1^\\infty \\frac{\\r}{1-\\r}\\left(\\frac{\\r}{1-\\r}+w\\right)^{-2}\\inv{G}(1-w^{-1}) \\overline{F}(\\inv{G}(1-w^{-1})) \\,\\mathrm{d} w \\\\\n &\\leq \\frac{2(1-\\r)^{2-\\varepsilon}}{\\r^{1-\\varepsilon}\\overline{F}(\\inv{G}(\\r))} \\int_1^\\infty w^{-\\varepsilon}\\inv{G}(1-w^{-1}) \\overline{F}(\\inv{G}(1-w^{-1})) \\,\\mathrm{d} w.\n\\end{align*}\nLet $q(w)$ denote the integrand in the last line. A similar analysis to \\eqref{eq:alphaIinfinite} indicates that the term in front of the integral vanishes as ${\\r\\uparrow 1}$, so we only need to show that the integral is bounded. 
This is implied by Lemma~\\ref{lem:onesidedMatuszewskaPD}(i) after noting that\n\\begin{align*}\n \\a(q)\n &\\leq -\\varepsilon + \\a(\\inv{G}(1-(\\cdot)^{-1})) + \\a(\\overline{F}(\\inv{G}(1-(\\cdot)^{-1}))) \n \\leq -1-\\varepsilon\n < 0,\n\\end{align*}\nwhere the inequalities follow from Lemmas~\\ref{lem:onesidedMatuszewskaproduct} and \\ref{lem:inverseMatuszewska2} and Corollary~\\ref{cor:FGinvMatuszewska}.\n\nLastly, we wish to show that $\\mathrm{III}(\\r)=\\Theta(1)$. Observe that\n\\begin{align*}\n \\mathrm{III}(\\r) \\hspace{-2pt}&\\hspace{2pt}\n \\leq \\l\\mathbb{E}[B^2] \\int_{1-\\r}^{\\frac{1}{1+\\r}} \\nu \\, \\frac{\\overline{F}\\left(\\inv{G}\\left(1-\\frac{1-\\r}{\\r}\\frac{1-\\nu}{\\nu}\\right)\\right)}{\\overline{F}(\\inv{G}(\\r))} \\,\\mathrm{d} \\nu + \\l\\mathbb{E}[B^2] \\int_{\\frac{1}{1+\\r}}^1 \\frac{\\overline{F}\\left(\\inv{G}\\left(1-\\frac{1-\\r}{\\r}\\frac{1-\\nu}{\\nu}\\right)\\right)}{\\overline{F}(\\inv{G}(\\r))} \\,\\mathrm{d} \\nu \\\\\n &\\leq 2\\r\\mathbb{E}[B^*] \\int_1^{\\frac{1}{1-\\r}} \\frac{\\r w}{1-\\r} \\left(\\frac{\\r}{1-\\r}+w\\right)^{-3} \\frac{\\overline{F}(\\inv{G}(1-w^{-1}))}{\\overline{F}(\\inv{G}(\\r))} \\,\\mathrm{d} w + \\mathbb{E}[B^*] \\\\\n &\\leq \\frac{2\\mathbb{E}[B^*]}{\\r} \\int_1^{\\frac{1}{1-\\r}} \\frac{w\\overline{F}(\\inv{G}(1-w^{-1}))}{\\frac{1}{(1-\\r)^2}\\overline{F}(\\inv{G}(\\r))} \\,\\mathrm{d} w + \\mathbb{E}[B^*] \n = \\frac{2\\mathbb{E}[B^*]}{\\r} \\int_1^{\\frac{1}{1-\\r}} \\frac{f(w)\/w}{f(1\/(1-\\r))} \\,\\mathrm{d} w + \\mathbb{E}[B^*],\n\\end{align*}\nwhere $f(w) = w^2\\overline{F}(\\inv{G}(1-w^{-1}))$. 
Lemma~\\ref{lem:onesidedMatuszewskaproduct} and Corollary~\\ref{cor:FGinvMatuszewska} then state that $\\beta(f) \\geq 2-\\frac{\\beta(\\overline{F})}{\\beta(\\overline{F})+1} > 0$, and therefore Lemma~\\ref{lem:onesidedMatuszewskaPI} implies \n\\[\n \\limsup_{{\\r\\uparrow 1}} \\int_1^{\\frac{1}{1-\\r}} \\frac{f(w)\/w}{f(1\/(1-\\r))} \\,\\mathrm{d} w = \\left[\\liminf_{y\\rightarrow \\infty} \\frac{f(y)}{\\int_1^y f(w)\/w \\,\\mathrm{d} w} \\right]^{-1} < \\infty.\n\\]\nAs such, $\\limsup_{{\\r\\uparrow 1}} \\mathrm{III}(\\r)<\\infty$.\n\nIn order to show $\\liminf_{{\\r\\uparrow 1}} \\mathrm{III}(\\r) >0$, fix $c\\in(0,1)$ and let $\\d_\\r := (1-\\r)\/(c\\r + 1-\\r)$. One may then readily verify that $\\mathrm{III}(\\r) \\geq \\l m_2(\\inv{G}(1-c)) \\int_{\\d_\\r}^{\\frac{1}{1+\\r}} \\nu \\,\\mathrm{d} \\nu \\rightarrow \\frac{m_2(\\inv{G}(1-c))}{8\\mathbb{E}[B]}>0$.\n\nThe $x_R=\\infty$ case is concluded once we prove $\\lim_{{\\r\\uparrow 1}} h^*(\\inv{G}(\\r)) = 0$. To this end, write $h^*(\\inv{G}(\\r))$ as $x \\overline{F}(\\inv{G}(1-x^{-1}))\/\\mathbb{E}[B]$, where $x=(1-\\r)^{-1}$. The claim then follows from Lemma~\\ref{lem:onesidedMatuszewskavanish} after noting that\n\\begin{align*}\n \\a(h^*(\\inv{G}(1-(\\cdot)^{-1}))) \\leq \\a(\\cdot) + \\a(\\overline{F}(\\inv{G}(1-(\\cdot)^{-1}))) \\leq 1 - \\frac{\\a(\\overline{F})}{\\a(\\overline{F}) + 1} = \\frac{1}{\\a(\\overline{F})+1} < 0,\n\\end{align*}\nwhere the inequalities follow from Lemma~\\ref{lem:onesidedMatuszewskaproduct} and Corollary~\\ref{cor:FGinvMatuszewska}.\n\n\nThe $x_R<\\infty$ case can be proven similarly. 
In that case, one fixes $1<\\varepsilon<2-\\frac{\\beta(\\overline{F}(x_R-(\\cdot)^{-1}))}{\\beta(\\overline{F}(x_R-(\\cdot)^{-1}))-1}$ and obtains $\\a(\\mathrm{I}(1-(\\cdot)^{-1})) \\leq -2 + \\frac{\\beta(\\overline{F}(x_R-(\\cdot)^{-1}))}{\\beta(\\overline{F}(x_R-(\\cdot)^{-1}))-1}<0$, $\\a(q) \\leq -\\varepsilon-\\frac{\\a(\\overline{F}(x_R-(\\cdot)^{-1}))+1}{\\a(\\overline{F}(x_R-(\\cdot)^{-1}))-1} \n\\leq 1-\\varepsilon< 0$ and $\\beta(f) \\geq 2-\\frac{\\beta(\\overline{F}(x_R-(\\cdot)^{-1}))}{\\beta(\\overline{F}(x_R-(\\cdot)^{-1}))-1}> 0$. The claim $h^*(\\inv{G}(\\r)) \\rightarrow \\infty$ follows from Lemma~\\ref{lem:xfx}.\n\n\n\n\\subsubsection{Infinite variance} \\label{subsubsec:infinitevariance}\nAssume $\\beta(\\overline{F}) > -2$\nand recall that $m_2(x) = 2\\mathbb{E}[B] \\int_0^x t \\,\\mathrm{d} G(t) = 2\\mathbb{E}[B] \\left( \\int_0^x \\bar{G}(t)\\,\\mathrm{d} t - x\\bar{G}(x) \\right)$. By Lemmas~\\ref{lem:onesidedMatuszewskaproduct} and \\ref{lem:barGMatuszewska}, one sees that $\\beta((\\cdot)\\bar{G}(\\cdot))>0$ and therefore it follows from Lemma~\\ref{lem:onesidedMatuszewskaPI} that\n\\begin{equation}\n \\limsup_{{x\\rightarrow\\infty}} \\frac{m_2(x)}{2\\mathbb{E}[B]x\\bar{G}(x)} = \\limsup_{{x\\rightarrow\\infty}} \\frac{\\int_0^x \\bar{G}(t)\\,\\mathrm{d} t}{x\\bar{G}(x)} - 1 < \\infty.\n \\label{eq:m2x}\n\\end{equation}\nAlso, since $\\beta((\\cdot)\\overline{F}(\\cdot))>-\\infty$, Lemma~\\ref{lem:onesidedMatuszewskaPD}(ii) indicates that \n\\[ \n \\limsup_{{x\\rightarrow\\infty}} \\frac{x\\overline{F}(x)}{\\bar{G}(x)} = \\limsup_{{x\\rightarrow\\infty}} \\frac{\\mathbb{E}[B]x\\overline{F}(x)}{\\int_x^\\infty \\overline{F}(t) \\,\\mathrm{d} t} < \\infty.\n\\]\nConsequently, it follows from relation \\eqref{eq:Tmain} that, for some $C,D>0$ and all $\\rho$ sufficiently close to one, we have\n\\begin{align*}\n \\mathbb{E}[T_\\mathrm{FB}]\n &\\leq \\frac{\\mathbb{E}[B]}{\\r} \\log\\frac{1}{1-\\r} + 2 \\int_0^\\infty \\frac{x 
\\overline{F}(x)}{(1-\\r G(x))^2}\\,\\mathrm{d} G(x) + \\frac{1}{\\mathbb{E}[B]} \\int_0^\\infty \\frac{m_2(x)}{x\\bar{G}(x)}\\frac{x \\overline{F}(x)}{(1-\\r G(x))^2} \\,\\mathrm{d} G(x) \\\\\n &\\leq \\frac{\\mathbb{E}[B]}{\\r} \\log\\frac{1}{1-\\r} + C \\int_0^\\infty \\frac{x \\overline{F}(x)}{\\bar{G}(x)} \\frac{1}{1-\\r G(x)} \\,\\mathrm{d} G(x) \n \\leq D \\log\\frac{1}{1-\\r},\n\\end{align*}\nand therefore $\\mathbb{E}[T_\\mathrm{FB}] = \\Theta\\left(\\log\\frac{1}{1-\\r}\\right)$.\n\n\n\n\\subsection{Special cases}\nThis section proves Theorem~\\ref{thm:ETextreme}. The maximum domains of attraction of the extreme value distributions are considered in order, followed by a distribution with an atom in its right endpoint. The Fr{\\'e}chet and Weibull cases follow rapidly from Theorem~\\ref{thm:ETMatuszewska} and the Dominated Convergence Theorem. The same approach works for the Gumbel case, although Theorem~\\ref{thm:ETMatuszewska} is not directly applicable. Finally, the atom case follows readily by analysing the sojourn time of maximum-sized jobs.\n\n\n\\subsubsection[Fr\\'echet(a) and Weibull(a)]{Fr{\\'e}chet($\\a$) and Weibull($\\a$)} \\label{subsubsec:ETFrechet}\nTheorem~3.3.7 in \\citet{embrechts1997modelling} states that $F\\in\\mathrm{MDA}(\\Phi_\\a)$ if and only if $\\overline{F}(x)=L(x)x^{-\\a}$ is regularly varying with index $-\\a$. Karamata's theorem \\citep[Theorem~1.5.11]{bingham1989regular} then states that $\\mathbb{E}[B]\\bar{G}(x)\\sim x\\overline{F}(x)\/(\\a-1)$\nis regularly varying with index $-(\\a-1)$.\nConsequently, Theorem~1.5.12 in \\citet{bingham1989regular} states that $\\inv{G}(1-1\/x)$ is regularly varying with index $1\/(\\a-1)$ and therefore $\\overline{F}(\\inv{G}(1-1\/x))$ is regularly varying with index $-\\a\/(\\a-1)$ \\citep[Proposition~1.5.7]{bingham1989regular}.\n\nFirst assume $\\a>2$. 
We saw in Section~\\ref{subsubsec:finitevariance} that the asymptotic behaviour of $\\mathbb{E}[T_\\mathrm{FB}]$ is identical to the asymptotic behaviour of term $\\mathrm{III}(\\r)$ (cf.\\ relation~\\eqref{eq:meanMatuszewskadecomposed}). Now, the Uniform Convergence Theorem \\citep[Theorem~1.5.2]{bingham1989regular} states that $\\frac{\\overline{F}(\\inv{G}(1-1\/x))}{\\overline{F}(\\inv{G}(1-1\/y))} \\rightarrow \\left(\\frac{y}{x}\\right)^{\\a\/(\\a-1)}$ uniformly for all $0<x\\leq y<\\infty$. The claim for $\\a>2$ then follows from the Dominated Convergence Theorem.\n\nNext, the characterization of $\\mathrm{MDA}(\\Psi_\\a)$ in \\citet{embrechts1997modelling} states that $F\\in\\mathrm{MDA}(\\Psi_\\a)$, $\\a>0,$ if and only if $x_R<\\infty$ and $\\overline{F}(x_R-x^{-1})=L(x)x^{-\\a}$ is regularly varying with index $-\\a$. The corresponding result then follows after noting that $\\mathbb{E}[B]\\bar{G}(x_R-x^{-1})\\sim L(x)x^{-\\a-1}\/(\\a+1)$ is regularly varying with index $-(\\a+1)$ and $\\frac{\\overline{F}(\\inv{G}(1-1\/x))}{\\overline{F}(\\inv{G}(1-1\/y))} \\rightarrow \\left(\\frac{y}{x}\\right)^{\\a\/(\\a+1)}$ uniformly for all $0<x\\leq y<\\infty$.\n\n\n\\subsubsection{Atom in the right endpoint} \\label{subsubsec:ETatom}\nFinally, assume that $x_R<\\infty$ and that the job size distribution has an atom in its right endpoint, i.e.\\ $p := \\lim_{x\\uparrow x_R}\\overline{F}(x) = \\P(B=x_R) > 0$. The analysis relies on the following lemma, which is proven at the end of this section.\n\\begin{lemma} \\label{lem:linearapprox}\nLet $f:D\\rightarrow\\mathbb{R}$ be a function on a domain $D\\subseteq\\mathbb{R}$, and let $x\\in\\bar{D}$ be such that $\\lim_{y\\uparrow x} f(y) = p$. Then, there exist $z>0$ and $q>0$ such that\n\\begin{equation}\n f(x-y) \\leq p + q y\n \\label{eq:linearapprox}\n\\end{equation}\nfor all $y\\in(0,z]$ that satisfy $x-y\\in D$.\n\\end{lemma}\n\nLet $q>0$ and $\\d^*>0$ be such that $\\overline{F}(x_R-\\d) \\leq p + q\\d$ for all $\\d\\in(0,\\d^*]$. It follows that $\\mathbb{E}[B]\\bar{G}(x) = \\int_x^{x_R} \\overline{F}(y) \\,\\mathrm{d} y \\sim p(x_R-x)$ as $x\\uparrow x_R$, and hence $x_R-\\inv{G}(u) \\sim \\mathbb{E}[B](1-u)\/p$ as $u\\uparrow 1$. Fix $\\varepsilon>0$ and let $u^*\\in(0,1)$ be such that $x_R - \\inv{G}(u)\\leq (1+\\varepsilon)\\mathbb{E}[B](1-u)\/p$ for all $u\\in(u^*,1)$. 
Now, for all $u>\\r_0:=\\max\\{u^*, 1-p\\d^*\/((1+\\varepsilon)\\mathbb{E}[B])\\}$ we have\n\\begin{equation}\n p \\leq \\overline{F}(\\inv{G}(u)) \\leq p + \\frac{q}{p}(1+\\varepsilon)\\mathbb{E}[B](1-u) =: p + p\\tilde{q}(1-u)\n \\label{eq:FbarGinvAtom}\n\\end{equation}\nand hence, for $\\tilde{q}=q(1+\\varepsilon)\\mathbb{E}[B]\/p^2$, the relations\n\\begin{equation*}\n \\frac{1}{1+\\tilde{q}(1-\\r)} \n \\leq \\frac{\\overline{F}\\left(\\inv{G}\\left(1-\\frac{1-\\r}{\\r}\\frac{1-\\nu}{\\nu}\\right)\\right)}{\\overline{F}(\\inv{G}(\\r))} \n \\leq 1 + \\tilde{q}\\,\\frac{1-\\r}{\\r}\\frac{1-\\nu}{\\nu} \n \\leq 1 + \\tilde{q}\\,\\frac{1-\\r}{\\r}\\frac{1}{\\nu}\n\\end{equation*}\nhold for all $\\nu>\\frac{1-\\r}{1-\\r \\cdot \\r_0}, \\r>\\r_0$.\n\nConsider term $\\mathrm{III}(\\r)$. On the one hand, we find\n\\begin{align*}\n \\limsup_{{\\r\\uparrow 1}} \\mathrm{III}(\\r)\n &\\leq \\limsup_{{\\r\\uparrow 1}} \\frac{\\mathbb{E}[B^2]}{\\mathbb{E}[B]} \\int_{1-\\r}^1 \\nu \\, \\frac{\\overline{F}\\left(\\inv{G}\\left(1-\\frac{1-\\r}{\\r}\\frac{1-\\nu}{\\nu}\\right)\\right)}{\\overline{F}(\\inv{G}(\\r))} \\,\\mathrm{d} \\nu \\\\\n &\\leq \\limsup_{{\\r\\uparrow 1}} \\frac{\\mathbb{E}[B^2]}{\\mathbb{E}[B]} \\int_{1-\\r}^{\\frac{1-\\r}{1-\\r\\cdot \\r_0}} \\frac{1}{p} \\,\\mathrm{d} \\nu + \\frac{\\mathbb{E}[B^2]}{\\mathbb{E}[B]} \\int_{\\frac{1-\\r}{1-\\r\\cdot\\r_0}}^1 \\left\\{\\nu + \\tilde{q}\\,\\frac{1-\\r}{\\r}\\right\\} \\,\\mathrm{d} \\nu \\\\\n &\\leq \\limsup_{{\\r\\uparrow 1}} \\frac{\\mathbb{E}[B^2]}{p\\mathbb{E}[B]} \\frac{1-\\r}{1-\\r\\cdot \\r_0} + \\frac{\\mathbb{E}[B^2]}{2\\mathbb{E}[B]} + \\frac{\\mathbb{E}[B^2]}{\\mathbb{E}[B]} \\,\\tilde{q}\\, \\frac{1-\\r}{\\r}\n = \\mathbb{E}[B^*].\n\\end{align*}\nOn the other hand, we have\n\\begin{align*}\n \\liminf_{{\\r\\uparrow 1}} \\mathrm{III}(\\r)\n &\\geq \\liminf_{{\\r\\uparrow 1}} \\frac{\\r m_2(\\inv{G}(\\r_0))}{\\mathbb{E}[B]} \\int_{\\frac{1-\\r}{1-\\r\\cdot\\r_0}}^1 \\frac{\\nu}{1+\\tilde{q}(1-\\r)} 
\\,\\mathrm{d} \\nu \\\\\n &= \\liminf_{{\\r\\uparrow 1}} \\frac{\\r m_2(\\inv{G}(\\r_0))}{2\\mathbb{E}[B]} \\frac{1}{1+\\tilde{q}(1-\\r)} \\left(1-\\left(\\frac{1-\\r}{1-\\r\\cdot\\r_0}\\right)^2\\right)\n = \\frac{m_2(\\inv{G}(\\r_0))}{\\mathbb{E}[B^2]}\\cdot \\mathbb{E}[B^*].\n\\end{align*}\nSince $\\r_0$ may be chosen arbitrarily close to unity, we find $\\mathbb{E}[T_\\mathrm{FB}] \\sim \\frac{\\mathbb{E}[B^*]\\overline{F}(\\inv{G}(\\r))}{(1-\\r)^2} \\sim \\frac{p\\mathbb{E}[B^*]}{(1-\\r)^2}$ as ${\\r\\uparrow 1}$. Here, the last equivalence follows from \\eqref{eq:FbarGinvAtom}.\nThe section is concluded with the proof of Lemma~\\ref{lem:linearapprox}.\n\\begin{proof}[Proof of Lemma~\\ref{lem:linearapprox}]\nWithout loss of generality, we assume that $(x-1,x)\\subset D$. For the sake of contradiction, assume that the statement of the lemma is not true, i.e.\\ for all $z>0$ and all $q>0$ there exists $\\xi\\in(0,z]$ such that\n\\begin{equation}\n f(x-\\xi) > p + q \\xi.\n \\label{eq:flarger}\n\\end{equation}\nDefine $z_1 := 1, q_1 := 1$ and let $\\xi_1\\in(0,1]$ be such that~\\eqref{eq:flarger} holds with $q=q_1$ and $\\xi=\\xi_1$. By definition of the left limit, for any $\\varepsilon>0$ there exists $\\eta^*>0$ such that $f(x-\\eta) \\leq p+\\varepsilon$ for all $\\eta\\in(0,\\eta^*]$. In particular, by choosing $\\varepsilon = q_1 \\xi_1$ we obtain $\\eta^*=:\\eta_2^*<\\xi_1\\leq z_1$ such that $f(x-\\eta) \\leq p+q_1 \\xi_1$ for all $\\eta \\in(0,\\eta_2^*]$.\n\nDefine $z_2 := \\min\\{\\eta_2^*, 1\/2\\}$ and set $q_2:= 1\/z_2$. Again, there exists $\\xi_2\\in(0,z_2]$ such that~\\eqref{eq:flarger} holds for $q=q_2$ and $\\xi=\\xi_2$. 
By repeating the above procedure we obtain three sequences $(q_n)_{n\\in\\mathbb{N}}, (z_n)_{n\\in\\mathbb{N}}$ and $(\\xi_n)_{n\\in\\mathbb{N}}$ such that $q_n = 1\/z_n$, $0< z_{n+1} < \\xi_n < z_n \\leq 1\/n$ and\n\\begin{equation}\n f(x-\\xi_n) > p + q_n \\xi_n\n \\label{eq:flargergeneral}\n\\end{equation}\nfor all $n\\in\\mathbb{N}$. From these properties, one may additionally deduce that $\\xi_n > 1\/q_{n+1}, \\xi_n\\downarrow 0$ and $q_n\\rightarrow\\infty$. \n\nWe will obtain a contradiction by showing that $(q_n)_{n\\in\\mathbb{N}}$ also converges as $n\\rightarrow\\infty$. Assume that $\\limsup_{n\\rightarrow\\infty} q_n\\xi_n > 0$. Then, relation~\\eqref{eq:flargergeneral} implies $\\limsup_{n\\rightarrow\\infty} f(x-\\xi_n) \\geq \\limsup_{n\\rightarrow\\infty} p + q_n \\xi_n > p$ which contradicts the lemma assumptions. Therefore, $\\limsup_{n\\rightarrow\\infty} q_n\\xi_n$ must equal zero. It then follows that $0\\leq \\limsup_{n\\rightarrow\\infty} q_n\/q_{n+1} \\leq \\limsup_{n\\rightarrow\\infty} q_n\\xi_n = 0$ and as such the ratio test states that the sequence $(q_n)_{n\\in\\mathbb{N}}$ converges.\n\\end{proof}\nNote that Lemma~\\ref{lem:linearapprox} can be applied generally to yield lower and upper bounds for $f(y)$ around any point $x\\in \\bar{D}$ for which either $\\lim_{y\\uparrow x} f(y)$ or $\\lim_{y\\downarrow x} f(y)$ exists.\n\n\n\n\n\n\n\n\n\\section[Asymptotic behaviour of h*(Ginv(rho)) if F in MDA(Lambda)]{Asymptotic behaviour of $h^*(\\inv{G}(\\r))$ if $F\\in\\mathrm{MDA}(\\Lambda)$}\n\\label{sec:TmeanGumbel}\nThis section is dedicated to the proof of Theorem~\\ref{thm:TmeanGumbel}. Theorem~\\ref{thm:MDAGumbel} states that $c_n$ may be chosen as $1\/h^*(\\Finv(1-n^{-1}))$, so that the theorem follows from Theorem~\\ref{thm:ETextreme} after analysing the limit $\\lim_{n\\rightarrow \\infty} h^*(\\inv{G}(1-n^{-1}))\/h^*(\\Finv(1-n^{-1})) = \\lim_{y\\uparrow 1} (1-y)^{-2} \\overline{F}(\\inv{G}(y))\\bar{G}(\\Finv(y))$. 
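Before turning to the proof, a small numerical illustration (an assumption-laden sketch, not taken from the analysis): for $\mathrm{Exp}(\mu)$ job sizes the equilibrium distribution coincides with $F$ itself, so the quantity $(1-y)^{-2}\overline{F}(\inv{G}(y))\bar{G}(\Finv(y))$ equals one for every $y$, consistent with the exponential-like row of Table~\ref{tab:Gumbel}. The snippet below evaluates it with generic numerics (trapezoidal quadrature for $\bar{G}$ and bisection for the inverses) rather than the closed form, as a check of the machinery.

```python
import math

# Evaluate (1-y)^{-2} * Fbar(Ginv(y)) * Gbar(Finv(y)) for Exp(mu) job sizes.
# For this distribution F and its equilibrium distribution G coincide,
# so the quantity should equal 1 for every y (up to quadrature error).

mu = 2.0
EB = 1.0 / mu                      # E[B] for Exp(mu)

def Fbar(x):                       # job size tail P(B > x)
    return math.exp(-mu * x)

def Gbar(x):                       # equilibrium tail: int_x^inf Fbar(t) dt / E[B]
    n, b = 2000, x + 40.0 / mu     # truncate where the tail is ~e^{-40}
    h = (b - x) / n
    s = 0.5 * (Fbar(x) + Fbar(b)) + sum(Fbar(x + i * h) for i in range(1, n))
    return s * h / EB

def inv(tail, p):                  # bisection: x with tail(x) = p, tail decreasing
    lo, hi = 0.0, 1.0
    while tail(hi) > p:
        hi *= 2.0
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if tail(mid) > p:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

for y in (0.9, 0.99, 0.999):
    q = Fbar(inv(Gbar, 1.0 - y)) * Gbar(inv(Fbar, 1.0 - y)) / (1.0 - y) ** 2
    assert abs(q - 1.0) < 1e-2, (y, q)
```

For distributions where $F$ and $G$ differ (e.g.\ lognormal) the same computation would instead approach the nontrivial constant $e^{-L}$, although the convergence in $y$ is slow.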
The proof heavily relies upon the work by \\citet{dehaan1974equivalence} and \\citet{resnick1987extreme}, who both consider $\\Gamma$- and $\\Pi$-varying functions:\n\\begin{definition} \\label{def:Gammavarying}\nA function $U:(x_L,x_R)\\rightarrow \\mathbb{R}, \\lim_{x\\uparrow x_R} U(x) = \\infty$ is in the class of $\\Gamma$-varying functions if it is non-decreasing, and there exists a function $f:(x_L,x_R)\\rightarrow\\mathbb{R}_{\\geq 0}$ satisfying\n\\begin{equation}\n \\lim_{x\\uparrow x_R} \\frac{U(x+tf(x))}{U(x)} = e^t\n\\end{equation}\nfor all $t\\in\\mathbb{R}$. The function $f(\\cdot)$ is called an \\textit{auxiliary function} and is unique up to asymptotic equivalence.\n\\end{definition}\n\\begin{definition} \\label{def:Pivarying}\nA function $V:(x_L,\\infty)\\rightarrow \\mathbb{R}_{\\geq 0}$ is in the class of $\\Pi$-varying functions if it is non-decreasing, and there exist functions $a(x)>0, b(x)\\in\\mathbb{R}$, such that\n\\begin{equation}\n \\lim_{{x\\rightarrow\\infty}} \\frac{V(tx)-b(x)}{a(x)} = \\log t\n\\end{equation}\nfor all $t>0$. The function $a(\\cdot)$ is called an \\textit{auxiliary function} and is unique up to asymptotic equivalence.\n\\end{definition}\n\nIt turns out that $\\Gamma$- and $\\Pi$-varying functions are closely related to $\\mathrm{MDA}(\\Lambda)$. In particular, if $F\\in\\mathrm{MDA}(\\Lambda)$ with auxiliary function $1\/h^*$, then Proposition~1.9 in \\citet{resnick1987extreme} states that $U_F :=1\/\\overline{F}\\in\\Gamma$ with auxiliary function $f_F := 1\/h^*$. Proposition~0.9(a) then states that $V_F(\\cdot):= \\inv{U}_F(\\cdot) = \\inv{\\left(\\frac{1}{\\overline{F}}\\right)}(\\cdot) = \\Finv\\left(1-(\\cdot)^{-1}\\right)\\in\\Pi$ with auxiliary function $a_F(\\cdot):=f_F(\\inv{U}_F(\\cdot)) = 1\/h^*(\\Finv(1-(\\cdot)^{-1}))$. 
Similarly, using Lemma~\\ref{lem:FandGareGumbel}, we find that $U_G:=1\/\\bar{G}\\in\\Gamma$ and $V_G(\\cdot):=\\inv{U}_G(\\cdot) = \\inv{G}\\left(1-(\\cdot)^{-1}\\right)\\in\\Pi$ with auxiliary function $a_G(\\cdot) := 1\/h^*(\\inv{G}(1-(\\cdot)^{-1}))$.\n\nNow, since Theorem~\\ref{thm:MDAGumbel} states that the norming constants $c_n$ may be chosen as $1\/h^*(\\Finv(1-1\/n))$, we are done once we show that $\\lim_{n \\rightarrow \\infty} c_n h^*(\\inv{G}(1-1\/n)) = \\lim_{{x\\rightarrow\\infty}} \\frac{a_F(x)}{a_G(x)}$ attains the value claimed in each case of the theorem.\n\nCorollary~3.4 in \\citet{dehaan1974equivalence} states that\\footnote{Here, we denote $0^{-1}=+\\infty$.} $\\lim_{x \\uparrow x_R} \\frac{a_F(x)}{a_G(x)} = \\xi^{-1} \\in [0,\\infty]$ if and only if there exist a positive function $b(x)$ with $\\lim_{x \\uparrow x_R}b(x) = \\xi$ and constants $b_2>0$ and $b_3\\in\\mathbb{R}$ such that\\footnote{Their paper only considers the $x_R=\\infty$ case; however, the proof also holds for finite $x_R$.} $P(x) = b_3 + \\int_0^x b(t) \\,\\mathrm{d} t$ and $\\inv{V_F}(x) \\sim b_2 \\inv{V_G}(P(x))$ as $x \\uparrow x_R$. As $\\inv{V_\\bullet}(x) = \\inv{(\\inv{U_\\bullet})}(x) \\sim U_\\bullet(x)$ \\citep[p.44]{resnick1987extreme}, this is equivalent to finding a function $P(x)$, of the given form, that satisfies\n\\begin{equation}\n \\lim_{x\\uparrow x_R} \\frac{\\bar{G}(P(x))}{b_2 \\overline{F}(x)} = \\lim_{x\\uparrow x_R} \\frac{U_F(x)}{b_2 U_G(P(x))} = \\lim_{x\\uparrow x_R} \\frac{\\inv{V}_F(x)}{b_2 \\inv{V}_G(P(x))} = 1.\n \\label{eq:P2}\n\\end{equation}\nWe use the following lemma, proven at the end of this section, to construct a suitable $P(x)$: \n\\begin{lemma} \\label{lem:strictlyincreasingF}\nLet $F$ be a c.d.f. 
Then, there exists a strictly increasing, continuous c.d.f.\\ $F_{\\uparrow}(x)$ satisfying both $\\overline{F}_{\\uparrow}(x)\\sim \\overline{F}(x)$ and $\\bar{G}(F_{\\uparrow}(x))\\sim \\bar{G}(F(x))$ as $x\\uparrow x_R$.\n\\end{lemma}\n\nAs $\\inv{G}(F_{\\uparrow}(x))$ is strictly increasing, there exists a positive function $b(\\cdot)$ such that $\\int_0^x b(t) \\,\\mathrm{d} t=\\inv{G}(F_{\\uparrow}(x))$. Therefore, we see that \\eqref{eq:P2} is satisfied with $b_2=1$ and $b_3=0$. The result follows once we show that\n\\begin{equation}\n \\lim_{{x\\rightarrow\\infty}} b(x) = \\lim_{{x\\rightarrow\\infty}} \\frac{P(x)}{x} = \\lim_{{x\\rightarrow\\infty}} \\frac{\\inv{G}(F(x))}{x} = \\xi\n \\label{eq:Pinfinite}\n\\end{equation}\nif $x_R=\\infty$, and once we show that\n\\begin{equation}\n \\lim_{x \\uparrow x_R} b(x) = \\lim_{x\\uparrow x_R} \\frac{P(x_R)-P(x)}{x_R-x} = \\lim_{x\\uparrow x_R} \\frac{x_R - \\inv{G}(F(x))}{x_R-x} = \\xi\n \\label{eq:Pfinite}\n\\end{equation}\nif $x_R<\\infty$.\n\nThe right-hand sides of both \\eqref{eq:Pinfinite} and \\eqref{eq:Pfinite} depend on the function $\\inv{G}(F(x))$. The advantage of this representation is apparent from the following key relation, which connects $\\inv{G}(F(x))$ to $h^*(x)$:\n\\begin{equation}\n \\mathbb{E}[B] h^*(x) = \\exp\\left[\\int_{\\inv{G}(F(x))}^x h^*(t) \\,\\mathrm{d} t\\right].\n \\label{eq:GinvF}\n\\end{equation}\nRelation~\\eqref{eq:GinvF} follows readily from $h^*(x) = -\\frac{\\,\\mathrm{d}}{\\,\\mathrm{d} x} \\log \\bar{G}(x)$: the integral evaluates to $\\log \\bar{G}(\\inv{G}(F(x))) - \\log \\bar{G}(x) = \\log\\left(\\overline{F}(x)\/\\bar{G}(x)\\right)$, which equals $\\log\\left(\\mathbb{E}[B]h^*(x)\\right)$ since $G$ has density $\\overline{F}(\\cdot)\/\\mathbb{E}[B]$. In the upcoming analysis, we first focus on \\eqref{eq:Pinfinite} and then consider \\eqref{eq:Pfinite}.\n\n\n\\subsection{Infinite support}\nFirst assume $x_R=\\infty$. 
The following theorem relates the assumptions on $\\overline{F}(x)$ to properties of $h^*(x)$:\n\\begin{theorem}[\\citet{beirland1995}, Theorem~2.1] \\label{thm:beirland}\n\\leavevmode\n\\begin{enumerate}[(i)]\n\\item If there exists $\\a>0$ and a slowly varying function $l(x)$ such that $-\\log \\overline{F}(x)\\sim l(x) x^\\a$ as $x\\rightarrow\\infty$, then $h^*(x)\\sim \\a l(x) x^{\\a-1}$ as $x\\rightarrow\\infty$ if and only if \n \\begin{equation}\n \\lim_{\\l \\downarrow 1} \\liminf_{x\\rightarrow\\infty} \\inf_{t\\in[1,\\l]} \\{\\log h^*(tx) - \\log h^*(x)\\} \\geq 0.\n \\end{equation}\n\\item If there exists a function $l(x):[0,\\infty)\\rightarrow \\mathbb{R}$ with $\\liminf_{x\\rightarrow \\infty} l(x) >1$ such that for all $\\l>0$\n \\begin{equation}\n \\lim_{x\\rightarrow\\infty} \\frac{-\\log\\overline{F}(\\l x)+\\log\\overline{F}(x)}{l(x)} = \\log(\\l),\n \\end{equation}\n then $l(x)$ is slowly varying and $h^*(x)\\sim (l(x)-1)\/x$ as $x\\rightarrow\\infty$.\n\\end{enumerate}\n\\end{theorem}\n\nThe cases in Theorem~\\ref{thm:TmeanGumbel} correspond to the cases in Theorem~\\ref{thm:beirland}. We will consider the implications of Theorem~\\ref{thm:beirland} to derive the results presented in Theorem~\\ref{thm:TmeanGumbel}.\n\\begin{enumerate}[(i)]\n\\item\nAssume $h^*(x) \\sim \\a l(x) x^{\\a-1},\\a>0,$ and note that\n\\begin{align*}\n \\lim_{x\\rightarrow \\infty} \\frac{-\\log(\\mathbb{E}[B]h^*(x))}{x h^*(x)} \n &= \\lim_{x\\rightarrow \\infty} \\frac{-\\log(\\mathbb{E}[B]\\a l(x)) - (\\a-1)\\log(x)}{\\a l(x)x^\\a} \n = 0.\n\\end{align*}\nWe will prove the relation $\\lim_{x\\rightarrow \\infty} \\inv{G}(F(x))\/x = 1$ by contradiction. Specifically, if $\\limsup_{x\\rightarrow \\infty} \\inv{G}(F(x))\/x >1$ then there exists $\\varepsilon>0$ and a sequence $(x_n)_{n\\in\\mathbb{N}}, x_n\\rightarrow\\infty$, such that $\\inv{G}(F(x_n))\/x_n \\geq 1+\\varepsilon$ for all $n\\in\\mathbb{N}$. 
The Uniform Convergence Theorem \\citep[Theorems~1.2.1 and 1.5.2]{bingham1989regular} then implies\n\\begin{align*}\n \\frac{-\\log(\\mathbb{E}[B]h^*(x_n))}{x_n h^*(x_n)}\n &= \\int_{x_n}^{\\inv{G}(F(x_n))} \\frac{h^*(t)}{x_n h^*(x_n)} \\,\\mathrm{d} t\n = \\int_1^{\\inv{G}(F(x_n))\/x_n} \\frac{h^*(\\tau x_n)}{h^*(x_n)} \\,\\mathrm{d} \\tau \\\\\n &\\geq \\int_1^{1+\\varepsilon} \\frac{h^*(\\tau x_n)}{h^*(x_n)} \\,\\mathrm{d} \\tau\n \\sim \\int_1^{1+\\varepsilon} \\tau^{\\a-1} \\,\\mathrm{d} \\tau \n = \\a^{-1} ((1+\\varepsilon)^\\a-1)\n\\end{align*}\nfor every $n\\in\\mathbb{N}$. However, this contradicts $\\lim_{x\\rightarrow \\infty} \\frac{-\\log(\\mathbb{E}[B]h^*(x))}{x h^*(x)} = 0$ and it follows that $\\limsup_{x\\rightarrow \\infty} \\inv{G}(F(x))\/x \\leq 1$. Similarly, one may show that $\\liminf_{x\\rightarrow\\infty} \\inv{G}(F(x))\/x \\geq 1$ and therefore $\\lim_{x\\rightarrow \\infty} \\inv{G}(F(x))\/x = 1$ as claimed.\n\n\\item\nAlternatively, assume $h^*(x)\\sim (l(x)-1)\/x$ and denote $L=\\lim_{x\\rightarrow\\infty} \\log(x)\/l(x) \\in [0,\\infty]$. Then Lemma~\\ref{lem:xfx} states that $l(x)\\rightarrow \\infty$ and hence\n\\begin{align*}\n \\lim_{x\\rightarrow \\infty} \\frac{-\\log(\\mathbb{E}[B]h^*(x))}{x h^*(x)} \n &= \\lim_{x\\rightarrow \\infty} \\frac{-\\log(\\mathbb{E}[B]) - \\log(l(x)-1) + \\log(x)}{l(x)-1} \n = L. \\stepcounter{equation}\\tag{\\theequation} \\label{eq:ratiotoL}\n\\end{align*}\nNow, if $L=0$ then the analysis in (i) yields $\\lim_{x\\rightarrow \\infty} \\inv{G}(F(x))\/x = 1 = e^0$. 
If $L\\in(0,\\infty)$ then~\\eqref{eq:GinvF} and \\eqref{eq:ratiotoL} imply\n\\begin{align*}\n L &= \\lim_{x\\rightarrow \\infty} \\frac{-\\log(\\mathbb{E}[B]h^*(x))}{x h^*(x)} \n = \\lim_{x\\rightarrow \\infty} \\int_x^{\\inv{G}(F(x))} \\frac{h^*(t)}{xh^*(x)} \\,\\mathrm{d} t \\\\\n &= \\lim_{x\\rightarrow \\infty} \\frac{1}{\\log(x)} \\int_x^{\\inv{G}(F(x))} \\frac{l(t)-1}{\\log t} \\cdot \\frac{\\log(x)}{l(x)-1} \\cdot \\frac{\\log(t)}{t} \\,\\mathrm{d} t \\\\\n &= \\lim_{x\\rightarrow \\infty} \\frac{1}{\\log(x)} \\int_x^{\\inv{G}(F(x))} \\frac{\\log(t)}{t} \\,\\mathrm{d} t \n = \\lim_{x\\rightarrow \\infty} \\frac{\\log^2(\\inv{G}(F(x)))-\\log^2(x)}{2\\log(x)}.\n\\end{align*}\nWriting $\\inv{G}(F(x)) = u(x)x, u(x)x \\rightarrow \\infty,$ now yields\n\\begin{align*}\n L &= \\lim_{x\\rightarrow \\infty} \\log(u(x))\\left(1+\\frac{\\log(u(x))}{2\\log(x)}\\right),\n\\end{align*}\nfrom which we conclude $u(x)\\rightarrow e^L$ and consequently $\\lim_{x\\rightarrow\\infty} \\inv{G}(F(x))\/x = e^L$. \n\nFinally, if $L=\\infty$ then $h^*(x)\\downarrow 0$ and therefore $\\inv{G}(F(x))\\geq x$ by \\eqref{eq:GinvF}. For the sake of contradiction, assume $\\liminf_{x\\rightarrow\\infty} \\inv{G}(F(x))\/x < \\infty$. Then there exists $M_0\\geq 1$ such that for all $M\\geq M_0$ there exists a sequence $(x_n)_{n\\in\\mathbb{N}}, x_n\\rightarrow\\infty,$ such that $\\inv{G}(F(x_n))\/x_n \\leq M$ for every $n\\in\\mathbb{N}$. An analysis similar to that in (i) then shows that this contradicts relation \\eqref{eq:ratiotoL}, and therefore $\\lim_{x\\rightarrow\\infty} \\inv{G}(F(x))\/x = \\infty$.\n\\end{enumerate}\n\n\n\\subsection{Finite support}\nNow assume $x_R<\\infty$. 
Theorem~\\ref{thm:MDAGumbel} states that $\\overline{F}(x)$ can be represented as \\[\\overline{F}(x) = c(x) \\exp\\left\\{-\\int_z^x g(t) h^*(t) \\,\\mathrm{d} t\\right\\}, \\quad z < x < x_R,\\] where $c(x)\\rightarrow c > 0$ and $g(x)\\rightarrow 1$ as $x\\uparrow x_R$, and the auxiliary function $f_F(\\cdot)=1\/h^*(\\cdot)$ is positive, absolutely continuous and has density $f_F'(x)$ satisfying $\\lim_{x\\uparrow x_R} f_F'(x) = 0$. It is easily verified that the function $\\overline{F}_\\infty(x):= \\overline{F}(x_R-x^{-1}), x\\geq (x_R-z)^{-1},$ is also in $\\mathrm{MDA}(\\Lambda)$ with auxiliary function $f_\\infty(x):= x^2\/h^*(x_R-x^{-1})$. \nFrom this representation it is easy to obtain a finite-support equivalent of Theorem~\\ref{thm:beirland}:\n\\begin{corollary}\\label{cor:beirland}\nAssume $x_R<\\infty$.\n\\begin{enumerate}[(i)]\n\\item If there exists $\\a>0$ and a slowly varying function $l(x)$ such that $-\\log \\overline{F}(x_R-x^{-1})\\sim l(x) x^\\a$ as $x\\rightarrow\\infty$, then $h^*(x_R-x^{-1})\\sim \\a l(x) x^{\\a+1}$ as $x\\rightarrow\\infty$ if and only if \n \\begin{equation}\n \\lim_{\\l \\downarrow 1} \\liminf_{x\\rightarrow\\infty} \\inf_{t\\in[1,\\l]} \\{\\log h^*(x_R-(tx)^{-1}) - \\log h^*(x_R-x^{-1}) - 2\\log(t)\\} \\geq 0.\n \\end{equation}\n\\item If there exists a function $l(x):[0,\\infty)\\rightarrow \\mathbb{R}$ with $\\liminf_{x\\rightarrow \\infty} l(x) >1$ such that for all $\\l>0$\n \\begin{equation}\n \\lim_{x\\rightarrow\\infty} \\frac{-\\log\\overline{F}(x_R-(\\l x)^{-1})+\\log\\overline{F}(x_R-x^{-1})}{l(x)} = \\log(\\l),\n \\end{equation}\n then $l(x)$ is slowly varying and $h^*(x_R-x^{-1})\\sim (l(x)-1)x$ as $x\\rightarrow\\infty$.\n\\end{enumerate}\n\\end{corollary}\n\n\nAgain, the cases in Theorem~\\ref{thm:TmeanGumbel} correspond to the cases in Corollary~\\ref{cor:beirland}. The proof for the finite support case is similar to the infinite support case, yet we state it for completeness. 
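The claim about $\\overline{F}_\\infty$ can be checked with a one-line expansion (a sketch; it uses the standard fact that the auxiliary function satisfies $f_F(x) = o(x_R-x)$ as $x\\uparrow x_R$). Writing $U_F = 1\/\\overline{F}$ and $U_\\infty = 1\/\\overline{F}_\\infty$, we have\n\\begin{equation}\n U_\\infty(x+tf_\\infty(x)) = U_F\\left(x_R - \\frac{1}{x+tf_\\infty(x)}\\right) = U_F\\left(x_R - \\frac{1}{x} + \\frac{tf_\\infty(x)}{x^2}(1+o(1))\\right),\n\\end{equation}\nand since $f_\\infty(x)\/x^2 = f_F(x_R-x^{-1})$, the ratio $U_\\infty(x+tf_\\infty(x))\/U_\\infty(x)$ inherits the limit $e^t$ from the $\\Gamma$-variation of $U_F$.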
Note that $h^*(x) \\rightarrow \\infty$ as $x\\uparrow x_R$ in both cases, and therefore $\\frac{x_R-\\inv{G}(F(x))}{x_R-x} \\geq 1$ for all $x$ sufficiently close to $x_R$ by \\eqref{eq:GinvF}.\n\\begin{enumerate}[(i)]\n\\item\nAssume $h^*(x_R-x^{-1}) \\sim \\a l(x) x^{\\a+1},\\a>0,$ and note that\n\\begin{align*}\n \\lim_{x\\uparrow x_R} \\frac{-\\log(\\mathbb{E}[B]h^*(x))}{(x_R-x) h^*(x)} \n &= \\lim_{y\\rightarrow \\infty} \\frac{-\\log(\\mathbb{E}[B]h^*(x_R-y^{-1}))}{h^*(x_R-y^{-1})\/y} \\\\\n &= \\lim_{y\\rightarrow \\infty} \\frac{-\\log(\\mathbb{E}[B]\\a l(y)) - (\\a+1)\\log(y)}{\\a l(y)y^{\\a}} \n = 0.\n\\end{align*}\nWe will show that $\\lim_{x\\uparrow x_R} \\frac{x_R-\\inv{G}(F(x))}{x_R-x} = 1$ by contradiction. By our previous remark, we only need to show $\\limsup_{x\\uparrow x_R} \\frac{x_R-\\inv{G}(F(x))}{x_R-x} \\leq 1$. If this is false, then there exists $\\varepsilon\\in(0,1)$ and a sequence $(x_n)_{n\\in\\mathbb{N}}, x_n\\uparrow x_R$, such that $\\frac{x_R-x_n}{x_R-\\inv{G}(F(x_n))} \\leq 1-\\varepsilon$ for all $n\\in\\mathbb{N}$. 
As before, the Uniform Convergence Theorem \\citep[Theorems~1.2.1 and 1.5.2]{bingham1989regular} then implies\n\\begin{align*}\n -\\frac{\\log(\\mathbb{E}[B]h^*(x_n))}{(x_R-x_n) h^*(x_n)}\n &= \\int_{x_n}^{\\inv{G}(F(x_n))} \\frac{h^*(t)}{(x_R-x_n) h^*(x_n)} \\,\\mathrm{d} t \\\\\n &= \\int_{\\frac{x_R-x_n}{x_R-\\inv{G}(F(x_n))}}^1 \\frac{h^*(x_R-(x_R-x_n)\\tau^{-1})}{\\tau^2 h^*(x_R-(x_R-x_n))} \\,\\mathrm{d} \\tau \\\\\n &\\geq \\int_{1-\\varepsilon}^1 \\frac{h^*(x_R-(x_R-x_n)\\tau^{-1})}{\\tau^2 h^*(x_R-(x_R-x_n))} \\,\\mathrm{d} \\tau \n \\sim \\int_{1-\\varepsilon}^1 \\tau^{\\a-1} \\,\\mathrm{d} \\tau \n = \\a^{-1} (1-(1-\\varepsilon)^\\a)\n\\end{align*}\nfor every $n\\in\\mathbb{N}$, which contradicts $\\lim_{x\\uparrow x_R} \\frac{-\\log(\\mathbb{E}[B]h^*(x))}{(x_R-x) h^*(x)} = 0$.\n\n\\item\nNow, assume $h^*(x_R-x^{-1})\\sim (l(x)-1)x$ and let $L=\\lim_{x\\rightarrow\\infty} \\log(x)\/l(x) \\in [0,\\infty]$. Lemma~\\ref{lem:xfx} implies $l(x)\\rightarrow \\infty$, so that\n\\begin{align*}\n \\lim_{x\\uparrow x_R} \\frac{-\\log(\\mathbb{E}[B]h^*(x))}{(x_R-x) h^*(x)} \n &= \\lim_{y\\rightarrow \\infty} \\frac{-\\log(\\mathbb{E}[B](l(y)-1))-\\log(y)}{l(y)-1} \n = -L. \\stepcounter{equation}\\tag{\\theequation} \\label{eq:ratiotoL2}\n\\end{align*}\nIf $L=0$, then $\\lim_{x\\uparrow x_R} \\frac{x_R-\\inv{G}(F(x))}{x_R-x} = 1 = e^0$ by the analysis in (i). 
Alternatively, if $L\\in(0,\\infty)$ then~\\eqref{eq:GinvF} and \\eqref{eq:ratiotoL2} imply\n\\begin{align*}\n L &= \\lim_{x\\uparrow x_R} \\frac{\\log(\\mathbb{E}[B]h^*(x))}{(x_R-x) h^*(x)} \n = \\lim_{x\\uparrow x_R} \\int_{\\inv{G}(F(x))}^x \\frac{h^*(t)}{(x_R-x)h^*(x)} \\,\\mathrm{d} t \\\\\n &= \\lim_{x\\uparrow x_R} \\int_{\\frac{1}{x_R-\\inv{G}(F(x))}}^{\\frac{1}{x_R-x}} \\frac{h^*(x_R-\\tau^{-1})}{(x_R-x)\\tau^2 h^*(x_R-(x_R-x))} \\,\\mathrm{d} \\tau \\\\\n &= \\lim_{x\\uparrow x_R} \\frac{1}{\\log((x_R-x)^{-1})} \\int_{\\frac{1}{x_R-\\inv{G}(F(x))}}^{\\frac{1}{x_R-x}} \\frac{l(\\tau)-1}{\\log(\\tau)} \\cdot \\frac{\\log((x_R-x)^{-1})}{l((x_R-x)^{-1})-1} \\cdot \\frac{\\log(\\tau)}{\\tau}\\,\\mathrm{d} \\tau \\\\\n &= \\lim_{x\\uparrow x_R} \\frac{1}{\\log(x_R-x)} \\int_{\\frac{1}{x_R-x}}^{\\frac{1}{x_R-\\inv{G}(F(x))}} \\frac{\\log(\\tau)}{\\tau}\\,\\mathrm{d} \\tau \\\\\n &= \\lim_{x\\uparrow x_R} \\frac{\\log^2(x_R-\\inv{G}(F(x)))-\\log^2(x_R-x)}{2\\log(x_R-x)}.\n\\end{align*}\nWrite $\\inv{G}(F(x)) = x_R-(x_R-x)u(x)$, where $(x_R-x)u(x) \\rightarrow 0$ as $x\\uparrow x_R$. One then obtains\n\\begin{align*}\n L &= \\lim_{x\\uparrow x_R} \\log(u(x))\\left(1+\\frac{\\log(u(x))}{2\\log(x_R-x)}\\right),\n\\end{align*}\nimplying $u(x)\\rightarrow e^L$ and subsequently $\\lim_{x\\uparrow x_R} \\frac{x_R-\\inv{G}(F(x))}{x_R-x} = e^L$. \n\nLastly, consider $L=\\infty$ and assume $\\limsup_{x\\uparrow x_R} \\frac{x_R-\\inv{G}(F(x))}{x_R-x} < \\infty$ for the sake of contradiction. Then there exists $M_0\\geq 1$ such that for all $M\\geq M_0$ there exists a sequence $(x_n)_{n\\in\\mathbb{N}}, x_n\\uparrow x_R,$ such that $\\frac{x_R-\\inv{G}(F(x_n))}{x_R-x_n}\\leq M$ for every $n\\in\\mathbb{N}$. 
An analysis similar to that in (i) then shows that this contradicts relation \\eqref{eq:ratiotoL2}, and therefore $\\lim_{x\\uparrow x_R} \\frac{x_R-\\inv{G}(F(x))}{x_R-x} = \\infty$.\n\\end{enumerate}\n\n\n\n\\subsection{Proof of Lemma~\\ref{lem:strictlyincreasingF}}\nFor any positive, non-increasing $\\phi:[0,1)\\rightarrow (0,1)$ that vanishes as the argument tends to unity, we may define\n\\begin{equation}\n F_{\\phi}(x) := \\left\\{\\begin{array}{ll}\n F(x) & \\text{ if } x < \\dots\n \\end{array}\\right.\n\\end{equation}\n\nFor any $\\varepsilon>0$ we have\n\\begin{equation}\n \\P\\left(\\frac{T_\\mathrm{FB}}{\\mathbb{E}[T_\\mathrm{FB}]} > \\varepsilon \\right) \n = \\int_0^\\infty \\P(T_\\mathrm{FB}(x) > \\varepsilon \\mathbb{E}[T_\\mathrm{FB}]) \\,\\mathrm{d} F(x)\n \\leq \\P(T_\\mathrm{FB}(\\tilde{x}_\\r) > \\varepsilon \\mathbb{E}[T_\\mathrm{FB}]) + \\overline{F}(\\tilde{x}_\\r),\n \\label{eq:ToverETmain}\n\\end{equation}\nwhere the final term vanishes as ${\\r\\uparrow 1}$ by choice of $\\tilde{x}_\\r$. The proof is completed if the first probability on the right-hand side also vanishes as ${\\r\\uparrow 1}$. \n\nIn preparation for the analysis of $\\P(T_\\mathrm{FB}(\\tilde{x}_\\r) > \\varepsilon \\mathbb{E}[T_\\mathrm{FB}])$, reconsider the busy period representation $T_\\mathrm{FB}(x) \\stackrel{d}{=} \\mathcal{L}_x(W_x + x)$. The relation states that the sojourn time of a job of size $x$ is equal in distribution to a busy period with job sizes $B_i\\wedge x$, initiated by the job of size $x$ itself and the time $W_x$ required to serve all jobs already in the system up to level $x$. Here, the random variable $W_x$ is equal in distribution to the steady state waiting time in an $\\mathrm{M\/GI\/1}\/\\mathrm{FIFO}$ queue with job sizes $B\\wedge x$.\n\nLet $N_x(t)$ denote a Poisson process with rate $\\r_x\/\\mathbb{E}[B\\wedge x]$. 
Then, it follows from the busy period representation of $T_\\mathrm{FB}$ that\n\\begin{align*}\n \\P((1-\\r)^2T_\\mathrm{FB}(x) > y)\n &= \\P(\\mathcal{L}_x(W_x + x) > (1-\\r)^{-2}y) \\\\\n &= \\P\\left(\\inf\\left\\{t\\geq 0: \\sum_{i=1}^{N(t)}(B_i\\wedge x)-t \\leq -(W_x+x)\\right\\}> (1-\\r)^{-2}y\\right) \\\\\n &= \\P\\left(\\inf_{t\\in[0,(1-\\r)^{-2}y]}\\left\\{\\sum_{i=1}^{N(t)}(B_i\\wedge x)-t\\right\\} \\geq -(W_x+x)\\right) \\\\\n &= \\P\\left(\\sup_{t\\in[0,y]} \\left\\{\\frac{t}{(1-\\r)^2}-\\sum_{i=1}^{N((1-\\r)^{-2}t)}(B_i\\wedge x) \\right\\}\\leq W_x+x\\right). \\stepcounter{equation}\\tag{\\theequation} \\label{eq:busyperiodrepresentation}\n\\end{align*}\nAdditionally, application of Chebyshev's inequality to the above relation yields\n\\begin{align*}\n \\P((1-\\r)^2T_\\mathrm{FB}(x) > y)\n &\\leq \\P\\left(\\frac{y}{(1-\\r)^2}- \\sum_{i=1}^{N((1-\\r)^{-2}y)}(B_i\\wedge x) \\leq W_x+x\\right) \\\\\n &\\leq \\P\\bigg(\\bigg|W_x + \\sum_{i=1}^{N((1-\\r)^{-2}y)}(B_i\\wedge x) - \\frac{\\r_x}{1-\\r_x}\\mathbb{E}[(B\\wedge x)^*] - \\frac{\\r_x}{(1-\\r)^2}y\\bigg| \\geq \\\\\n &\\hspace{6cm} \\frac{1-\\r_x}{(1-\\r)^2}y-x-\\frac{\\r_x}{1-\\r_x}\\mathbb{E}[(B\\wedge x)^*] \\bigg) \\\\\n &\\leq \\frac{\\mathbb{V}\\mathrm{ar}[W_x]+\\mathbb{V}\\mathrm{ar}\\left[\\sum_{i=1}^{N((1-\\r)^{-2}y)}(B_i\\wedge x)\\right]}{\\left(\\frac{1-\\r_x}{(1-\\r)^2}y-x-\\frac{\\r_x}{1-\\r_x}\\mathbb{E}[(B\\wedge x)^*]\\right)^2} \\\\\n &= \\frac{\\frac{\\r_x^2}{(1-\\r_x)^2} \\mathbb{E}[(B\\wedge x)^*]^2 + \\frac{\\r_x}{1-\\r_x}\\mathbb{E}[((B\\wedge x)^*)^2]+ \\frac{2\\r_x \\mathbb{E}[(B\\wedge x)^*]}{(1-\\r)^2} y}{\\left(\\frac{1-\\r_x}{(1-\\r)^2}y-x-\\frac{\\r_x}{1-\\r_x}\\mathbb{E}[(B\\wedge x)^*]\\right)^2}.\n \\stepcounter{equation}\\tag{\\theequation} \\label{eq:ToverETmain2}\n\\end{align*}\nAt this point, similar to the approach in Section~\\ref{sec:Tmean}, we differentiate between the finite and infinite variance cases.\n\n\n\\subsection{Finite 
variance}\n\\label{subsec:ToverETfinite}\nThis section considers all functions $F$ that satisfy one of the conditions in the theorem statement and have finite variance. Specifically, this excludes the case $x_R=\\infty$, $\\beta(\\overline{F})>-2$. Fix\n\\begin{equation}\n \\tilde{p}(F) := \\left\\{ \\begin{array}{ll}\n \\frac{\\beta(\\overline{F})}{\\beta(\\overline{F})+1} & \\text{ if } F\\notin \\mathrm{MDA}(\\Lambda) \\text{ and } x_R=\\infty, \\\\\n \\frac{\\beta(\\overline{F}(x_R-(\\cdot)^{-1}))}{\\beta(\\overline{F}(x_R-(\\cdot)^{-1}))-1} \\quad & \\text{ if } F\\notin \\mathrm{MDA}(\\Lambda) \\text{ and } x_R<\\infty, \\text{ and } \\\\\n 1 & \\text{ if } F\\in\\mathrm{MDA}(\\Lambda),\n \\end{array} \\right.\n\\end{equation}\nand $\\tilde{\\gamma}\\in(\\tilde{p}(F)\/2, 1)$, and define $\\nu(\\r):=(1-\\r)^{\\tilde{\\gamma}}$ and $\\tilde{x}_\\r:= x_\\r^{\\nu(\\r)} = \\inv{G}\\left(1-\\frac{1-\\r}{\\r}\\frac{1-\\nu(\\r)}{\\nu(\\r)}\\right)$. Indeed $\\tilde{x}_\\r\\rightarrow x_R$, and we proceed with the analysis in \\eqref{eq:ToverETmain2}. 
Noting that $\\mathbb{E}[((B\\wedge x)^*)^2] = \\frac{\\mathbb{E}[(B\\wedge x)^3]}{3\\mathbb{E}[B]} \\leq \\frac{x\\mathbb{E}[B^2]}{3\\mathbb{E}[B]} = \\frac{2}{3}\\mathbb{E}[B^*]x$ and substituting $x=\\tilde{x}_\\r$ gives\n\\begin{align*}\n \\P((1-\\r)^2T_\\mathrm{FB}(\\tilde{x}_\\r) > y)\n &\\leq \\frac{\\left(\\frac{1-\\r}{1-\\r_{\\tilde{x}_\\r}}\\right)^2\\mathbb{E}[B^*]^2 + \\frac{1-\\r}{1-\\r_{\\tilde{x}_\\r}}\\frac{2}{3}\\mathbb{E}[B^*](1-\\r)\\tilde{x}_\\r + 2 \\mathbb{E}[B^*] y}{\\left(\\frac{1-\\r_{\\tilde{x}_\\r}}{1-\\r}y-(1-\\r)\\tilde{x}_\\r-\\frac{1-\\r}{1-\\r_{\\tilde{x}_\\r}}\\r_{\\tilde{x}_\\r}\\mathbb{E}[B^*]\\right)^2} \\\\\n &= \\frac{\\mathbb{E}[B^*]^2\\nu(\\r)^2 + \\frac{2}{3}\\mathbb{E}[B^*]\\nu(\\r) (1-\\r)x_\\r^{\\nu(\\r)} + 2 \\mathbb{E}[B^*] y}{\\left(\\nu(\\r)^{-1}y-(1-\\r)x_\\r^{\\nu(\\r)}-\\r_{x_\\r^{\\nu(\\r)}}\\mathbb{E}[B^*]\\nu(\\r)\\right)^2}.\n\\end{align*}\n\nWe now return to the probability $\\P(T_\\mathrm{FB}(\\tilde{x}_\\r) > \\varepsilon \\mathbb{E}[T_\\mathrm{FB}])$ in relation~\\eqref{eq:ToverETmain}. By Theorems~\\ref{thm:ETMatuszewska} and \\ref{thm:ETextreme}, there exists $C>0$ such that the inequality $(1-\\r)^2\\mathbb{E}[T_\\mathrm{FB}] \\geq C \\overline{F}(\\inv{G}(\\r))$ holds for all $\\rho$ sufficiently close to one. 
Denoting $\\tilde{\\varepsilon}:=\\varepsilon C$, this gives\n\\begin{align*}\n \\P(T_\\mathrm{FB}(\\tilde{x}_\\r) > \\varepsilon \\mathbb{E}[T_\\mathrm{FB}])\n &\\leq \\P((1-\\r)^2T_\\mathrm{FB}(\\tilde{x}_\\r) > \\tilde{\\varepsilon} \\overline{F}(\\inv{G}(\\r))) \\\\\n &\\leq \\frac{\\mathbb{E}[B^*]^2\\nu(\\r)^2 + \\frac{2}{3}\\mathbb{E}[B^*]\\nu(\\r) (1-\\r)x_\\r^{\\nu(\\r)} + 2 \\tilde{\\varepsilon} \\mathbb{E}[B^*] \\overline{F}(\\inv{G}(\\r))}{\\left(\\tilde{\\varepsilon}\\nu(\\r)^{-1}\\overline{F}(\\inv{G}(\\r))-(1-\\r)x_\\r^{\\nu(\\r)}-\\r_{x_\\r^{\\nu(\\r)}}\\mathbb{E}[B^*]\\nu(\\r)\\right)^2} \\\\\n &= \\frac{\\mathbb{E}[B^*]^2\\frac{\\nu(\\r)^4}{\\overline{F}(\\inv{G}(\\r))^2} + \\frac{2\\mathbb{E}[B^*]}{3} \\frac{\\nu(\\r)^3 (1-\\r)x_\\r^{\\nu(\\r)}}{\\overline{F}(\\inv{G}(\\r))^2} + 2 \\tilde{\\varepsilon} \\mathbb{E}[B^*]\\frac{\\nu(\\r)^2}{\\overline{F}(\\inv{G}(\\r))}}{\\left(\\tilde{\\varepsilon}-\\frac{\\nu(\\r)(1-\\r)x_\\r^{\\nu(\\r)}}{\\overline{F}(\\inv{G}(\\r))}-\\r_{x_\\r^{\\nu(\\r)}}\\mathbb{E}[B^*]\\frac{\\nu(\\r)^2}{\\overline{F}(\\inv{G}(\\r))}\\right)^2}.\n\\end{align*}\nSubsequently, we observe for any $\\nu\\in(0,1)$ that\n\\begin{equation}\n \\lim_{{\\r\\uparrow 1}} (1-\\r) x_\\r^{\\nu} \n = \\lim_{{\\r\\uparrow 1}} (1-\\r) \\inv{G}\\left(1-\\frac{1-\\nu}{\\nu}\\frac{1-\\r}{\\r}\\right) \n = \\lim_{z\\rightarrow x_R} \\frac{\\frac{\\nu}{1-\\nu}\\bar{G}(z) \\cdot z}{1+\\frac{\\nu}{1-\\nu}\\bar{G}(z)}\n \\leq \\lim_{z\\rightarrow x_R} \\frac{\\nu\\cdot z\\bar{G}(z)}{1-\\nu},\n \\label{eq:zbarGz}\n\\end{equation}\nwhere $z\\bar{G}(z)\\rightarrow 0$ as $z \\rightarrow x_R$ since $\\mathbb{E}[B^2]<\\infty$ (cf.\\ Section~\\ref{subsubsec:finitevariance}). 
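In short, with $G$ the stationary-excess distribution of $B$ (so that $\\bar{G}(z) = \\int_z^{x_R} \\overline{F}(t) \\,\\mathrm{d} t\/\\mathbb{E}[B]$), one has\n\\begin{equation}\n z\\bar{G}(z) = \\frac{z}{\\mathbb{E}[B]}\\int_z^{x_R} \\overline{F}(t) \\,\\mathrm{d} t \\leq \\frac{1}{\\mathbb{E}[B]}\\int_z^{x_R} t\\overline{F}(t) \\,\\mathrm{d} t \\rightarrow 0\n\\end{equation}\nas $z\\rightarrow x_R$, because $\\int_0^{x_R} t\\overline{F}(t) \\,\\mathrm{d} t = \\mathbb{E}[B^2]\/2 < \\infty$.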
It follows that $(1-\\r) x_\\r^{\\nu(\\r)} = o(\\nu(\\r))$ as ${\\r\\uparrow 1}$, so that $\\lim_{{\\r\\uparrow 1}} \\P(T_\\mathrm{FB} > \\varepsilon \\mathbb{E}[T_\\mathrm{FB}]) = 0$ provided that $\\lim_{{\\r\\uparrow 1}} \\frac{\\nu(\\r)^2}{\\overline{F}(\\inv{G}(\\r))}=0$.\n\nWrite $x=(1-\\r)^{-1}$. By Lemma~\\ref{lem:onesidedMatuszewskavanish}, it suffices to show $\\a\\left((\\cdot)^{-2\\tilde{\\gamma}} \/ \\overline{F}(\\inv{G}(1-(\\cdot)^{-1}))\\right) < 0$. This relation follows from Lemma~\\ref{lem:onesidedMatuszewskaproduct}, Corollary~\\ref{cor:FGinvMatuszewska} and our choice of $\\tilde{\\gamma}$:\n\\begin{align*}\n \\a\\left(\\frac{(\\cdot)^{-2\\tilde{\\gamma}} }{ \\overline{F}(\\inv{G}(1-(\\cdot)^{-1})) }\\right)\n &\\leq -2\\tilde{\\gamma} - \\beta\\left(\\overline{F}(\\inv{G}(1-(\\cdot)^{-1}))\\right)\n \\leq -2\\tilde{\\gamma} + \\tilde{p}(F)\n < 0.\n\\end{align*}\n\n\n\\subsection{Infinite variance}\nThis section concerns all functions $F$ that satisfy $x_R=\\infty, \\beta(\\overline{F})>-2$. In this case, $\\tilde{x}_\\r$ can be any function that satisfies both $\\lim_{{\\r\\uparrow 1}} \\tilde{x}_\\r = \\infty$ and $\\lim_{{\\r\\uparrow 1}} \\frac{\\tilde{x}_\\r}{\\bar{G}(\\tilde{x}_\\r) \\log\\left(\\frac{1}{1-\\r}\\right)} = 0$. \n\nTheorem~\\ref{thm:ETMatuszewska} implies that there exists $C>0$ such that $\\mathbb{E}[T_\\mathrm{FB}]\\geq C \\log\\left(\\frac{1}{1-\\r}\\right)$ for all $\\r$ sufficiently close to one. Again, denote $\\tilde{\\varepsilon}=\\varepsilon C$. 
The analysis resumes with relation~\\eqref{eq:ToverETmain2}, where we substitute $y$ by $\\tilde{\\varepsilon} (1-\\r)^2 \\log\\left(\\frac{1}{1-\\r}\\right)$ to obtain\n\\begin{align*}\n \\P(T_\\mathrm{FB}(x) > \\varepsilon \\mathbb{E}[T_\\mathrm{FB}])\n &\\leq \\P\\left((1-\\r)^2T_\\mathrm{FB}(x) > \\tilde{\\varepsilon} (1-\\r)^2 \\log\\left(\\frac{1}{1-\\r}\\right)\\right) \\\\\n &\\leq \\frac{\\frac{1}{(1-\\r_x)^2} \\mathbb{E}[(B\\wedge x)^*]^2 + \\frac{1}{1-\\r_x}\\mathbb{E}[((B\\wedge x)^*)^2]+ 2 \\tilde{\\varepsilon} \\mathbb{E}[(B\\wedge x)^*] \\log\\left(\\frac{1}{1-\\r}\\right)}{\\left(\\tilde{\\varepsilon}(1-\\r_x)\\log\\left(\\frac{1}{1-\\r}\\right)-x-\\frac{\\r_x}{1-\\r_x}\\mathbb{E}[(B\\wedge x)^*]\\right)^2}.\n\\end{align*}\nBy relation~\\eqref{eq:m2x}, there exists a function $b(x)$ that is bounded for all $x$ sufficiently large and satisfies $m_2(x) = \\mathbb{E}[B] b(x) x \\bar{G}(x)$. As such, $\\mathbb{E}[((B\\wedge x)^*)^2]=\\frac{\\mathbb{E}[(B\\wedge x)^3]}{3\\mathbb{E}[B]}\\leq \\frac{x m_2(x)}{3\\mathbb{E}[B]}=b(x)x^2 \\bar{G}(x)\/3$ and similarly $\\mathbb{E}[(B\\wedge x)^*] = \\frac{m_2(x)}{2\\mathbb{E}[B]} = b(x)x \\bar{G}(x)\/2$. 
Substituting this into the above relation yields\n\\begin{align*}\n \\P(T_\\mathrm{FB}(x) > \\varepsilon \\mathbb{E}[T_\\mathrm{FB}])\n &\\leq \\frac{\\frac{b(x)^2}{4}\\frac{x^2 \\bar{G}(x)^2}{(1-\\r_x)^2} + \\frac{b(x)}{3}\\frac{x^2 \\bar{G}(x)}{1-\\r_x}+ \\tilde{\\varepsilon} b(x) x \\bar{G}(x) \\log\\left(\\frac{1}{1-\\r}\\right)}{\\left(\\tilde{\\varepsilon}(1-\\r_x)\\log\\left(\\frac{1}{1-\\r}\\right)-x-\\frac{\\r_x b(x)}{2}\\frac{x \\bar{G}(x)}{1-\\r_x}\\right)^2},\n\\end{align*}\nso that\n\\begin{multline*}\n \\P(T_\\mathrm{FB}(x) > \\varepsilon \\mathbb{E}[T_\\mathrm{FB}]) \\\\\n \\leq \\frac{\\frac{b(x)^2}{4}\\frac{\\bar{G}(x)^2}{(1-\\r_x)^2}\\frac{x^2}{(1-\\r_x)^2\\log^2\\left(\\frac{1}{1-\\r}\\right)} + \\frac{b(x)}{3}\\frac{\\bar{G}(x)}{1-\\r_x}\\frac{x^2}{(1-\\r_x)^2\\log^2\\left(\\frac{1}{1-\\r}\\right)} + \\tilde{\\varepsilon} b(x) \\frac{\\bar{G}(x)}{1-\\r_x}\\frac{x}{(1-\\r_x)\\log\\left(\\frac{1}{1-\\r}\\right)}}{\\left(\\tilde{\\varepsilon}-\\frac{x}{(1-\\r_x)\\log\\left(\\frac{1}{1-\\r}\\right)}-\\frac{\\r_x b(x)}{2}\\frac{\\bar{G}(x)}{1-\\r_x}\\frac{x}{(1-\\r_x)\\log\\left(\\frac{1}{1-\\r}\\right)} \\right)^2}.\n\\end{multline*}\nThe result follows after noting that $1-\\r_x = 1-\\r G(x) \\geq \\bar{G}(x)$ and substituting $\\tilde{x}_\\r$ for $x$. \n\n\n\n\n\n\\section{Asymptotic behaviour of the sojourn time tail}\n\\label{sec:Ttail}\nIn this section, we prove Theorem~\\ref{thm:Ttail} after presenting two facilitating propositions. The proofs of the propositions are postponed to Sections~\\ref{subsec:proofWtoExp} and \\ref{subsec:proofTtail}. Throughout this section, $\\mathbf{e}(q)$ will denote an Exponentially distributed random variable with rate $q>0$. We abuse notation by writing $\\mathbf{e}(0)=+\\infty$.\n\nReconsider the relation $T_\\mathrm{FB}(x) \\stackrel{d}{=} \\mathcal{L}_x(W_x + x)$ to gain some intuition. 
A rough approximation of the duration of a busy period, given $W_x + x$ units of work at time $t=0$, is $(W_x + x)\/(1-\\r_x)$. The scaled sojourn time $(1-\\r)^2T_\\mathrm{FB}(x)$ is then approximated by $\\frac{1-\\r}{1-\\r_x}(1-\\r)(W_x + x)$. As in Section~\\ref{sec:Tmean}, define $x_\\r^\\nu = \\inv{G}\\left(1-\\frac{1-\\r}{\\r}\\frac{1-\\nu}{\\nu}\\right),\\nu\\in(1-\\r,1),$ so that $\\frac{1-\\r}{1-\\r_x}=\\nu$. Then for all $\\nu\\in(0,1)$, we have $(1-\\r)^2T_\\mathrm{FB}(x_\\r^\\nu) \\stackrel{d}{\\approx} \\nu(1-\\r)(W_{x_\\r^\\nu} + x_\\r^\\nu)$. We will show that $(1-\\r)x_\\r^\\nu\\rightarrow 0$ for all fixed $\\nu\\in(0,1)$. By contrast, the following proposition shows that $(1-\\r)W_{x_\\r^\\nu}$ behaves as an exponentially distributed random variable as ${\\r\\uparrow 1}$:\n\n\\begin{proposition} \\label{prop:WtoExp}\nLet $x_\\r^\\nu = \\inv{G}\\left(1-\\frac{1-\\r}{\\r}\\frac{1-\\nu}{\\nu}\\right),\\nu\\in(1-\\r,1),$ and let $W_x^\\r$ denote the steady state waiting time in an $\\mathrm{M\/GI\/1}\/\\mathrm{FIFO}$ queue with job sizes $B_i\\wedge x$ and arrival rate $\\r_x\/\\mathbb{E}[B\\wedge x]$. Then, for any fixed $\\nu\\in(0,1)$,\n\\begin{equation}\n (1-\\r)W_{x_\\r^\\nu} \\stackrel{d}{\\rightarrow} \\Exp((\\nu\\mathbb{E}[B^*])^{-1})\n\\end{equation}\nas ${\\r\\uparrow 1}$.\n\\end{proposition}\n\nIf $W^\\r=W_\\infty^\\r$ denotes the steady state waiting time in the non-truncated system, then \\citet{kingman1961single} proved that $(1-\\r)W^\\r\\stackrel{d}{\\rightarrow} \\Exp(\\mathbb{E}[B^*]^{-1})$. Proposition~\\ref{prop:WtoExp} shows how jobs can be truncated such that the exponential behaviour is preserved, and quantifies how the truncation affects the parameter of the exponential distribution. \n\nSubstituting the result in Proposition~\\ref{prop:WtoExp} into our approximation yields $(1-\\r)^2T_\\mathrm{FB}(x_\\r^\\nu) \\stackrel{d}{\\approx} \\Exp((\\nu^2 \\mathbb{E}[B^*])^{-1})$ for every fixed $\\nu\\in(0,1)$. 
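As a numerical sanity check of Proposition~\\ref{prop:WtoExp} (not part of the proof), one can evaluate the waiting-time transform explicitly for unit-mean exponential job sizes, where $G(x)=1-e^{-x}$, $\\mathbb{E}[B^*]=1$ and everything is in closed form. The sketch below assumes the classical Pollaczek-Khinchine transform written with the equilibrium distribution of $B\\wedge x$:

```python
import math

def lst_scaled_waiting_time(rho, nu, s):
    """Pollaczek-Khinchine transform E[exp(-s(1-rho)W_x)] at the truncation
    level x = x_rho^nu, for unit-mean exponential job sizes
    (so G(x) = 1 - exp(-x) and E[B*] = 1)."""
    gbar = (1 - nu) / nu * (1 - rho) / rho       # Gbar(x_rho^nu)
    x = -math.log(gbar)                          # x_rho^nu = Ginv(1 - gbar)
    rho_x = rho * (1 - math.exp(-x))             # load of the truncated queue
    sigma = (1 - rho) * s
    # LST of the equilibrium distribution of B ^ x (density e^{-t}/(1-e^{-x}) on [0, x])
    lst_eq = (1 - math.exp(-(sigma + 1) * x)) / ((sigma + 1) * (1 - math.exp(-x)))
    return (1 - rho_x) / (1 - rho_x * lst_eq)
```

As ${\\r\\uparrow 1}$ the value approaches $1\/(1+\\nu\\mathbb{E}[B^*]s)$; for instance, with $\\nu=1\/2$ and $s=1$ the transform tends to $2\/3$.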
We will show that the fraction of jobs for which $\\nu$ is in $(\\varepsilon,1-\\varepsilon)$ scales as $\\overline{F}(\\inv{G}(\\r))$, and that the contribution of other jobs to the tail of $(1-\\r)^2T_\\mathrm{FB}$ is negligible. The result is presented in Proposition~\\ref{prop:Ttail}, where we focus on the probability $\\P((1-\\r)^2 T_\\mathrm{FB} > \\mathbf{e}(q))$ for its connection to the Laplace transform of $T_\\mathrm{FB}^*$.\n\n\\begin{proposition} \\label{prop:Ttail}\nAssume $F\\in\\mathrm{MDA}(H)$, where $H$ is an extreme value distribution. Let $p(H)=\\frac{\\a}{\\a-1}$ if $H=\\Phi_\\a,\\a>2$; $p(H)=1$ if $H=\\Lambda$ and $p(H)=\\frac{\\a}{\\a+1}$ if $H=\\Psi_\\a,\\a>0$. Then\n\\begin{equation}\n \\lim_{{\\r\\uparrow 1}} \\frac{\\P((1-\\r)^2 T_\\mathrm{FB} > \\mathbf{e}(q))}{\\overline{F}(\\inv{G}(\\r))} \n = \\int_0^1 \\frac{8\\mathbb{E}[B^*]q \\nu}{\\sqrt{1+4\\mathbb{E}[B^*]q \\nu^2}\\left(\\sqrt{1+4\\mathbb{E}[B^*]q \\nu^2}+1\\right)^2} \\left(\\frac{1-\\nu}{\\nu}\\right)^{p(H)} \\,\\mathrm{d} \\nu\n\\end{equation}\nfor all $q\\geq 0$. Here, the integral is finite for all $q\\geq 0$.\n\\end{proposition}\n\nWe are now ready to prove Theorem~\\ref{thm:Ttail}. Using the relation $\\mathbb{E}[e^{-qY}] = \\P(\\mathbf{e}(q)>Y)$, \none sees that $\\P((1-\\r)^2 T^\\r_{\\mathrm{FB}} > \\mathbf{e}(q)) = 1-\\mathbb{E}[e^{-q(1-\\r)^2 T^\\r_{\\mathrm{FB}}}]$ and consequently\n\\begin{align*}\n \\frac{\\P((1-\\r)^2T_\\mathrm{FB}> \\mathbf{e}(q))}{\\overline{F}(\\inv{G}(\\r))}\n &= \\frac{(1-\\r)^2 \\mathbb{E}[T_\\mathrm{FB}]}{\\overline{F}(\\inv{G}(\\r))} \\cdot \\frac{1-\\mathbb{E}\\left[e^{-q(1-\\r)^2 T_\\mathrm{FB}}\\right]}{(1-\\r)^2 \\mathbb{E}[T_\\mathrm{FB}]} \\\\\n &= \\frac{(1-\\r)^2 \\mathbb{E}[T_\\mathrm{FB}]}{\\overline{F}(\\inv{G}(\\r))} \\cdot q \\cdot \\mathbb{E}\\left[e^{-q(1-\\r)^2 T_\\mathrm{FB}^*}\\right],\n\\end{align*}\nwhere $T_\\mathrm{FB}^*$ is the residual sojourn time and has density $\\P(T_\\mathrm{FB}>t)\/\\mathbb{E}[T_\\mathrm{FB}]$. 
\nConsequently,\n\\begin{align*}\n \\lim_{{\\r\\uparrow 1}} \\mathbb{E}&\\left[e^{-q(1-\\r)^2 T_\\mathrm{FB}^*}\\right] \\\\\n &= \\lim_{{\\r\\uparrow 1}} \\frac{\\overline{F}(\\inv{G}(\\r))}{(1-\\r)^2 \\mathbb{E}[T_\\mathrm{FB}]} \\int_0^1 \\frac{8\\mathbb{E}[B^*] \\nu}{\\sqrt{1+4\\mathbb{E}[B^*]q \\nu^2}\\left(\\sqrt{1+4\\mathbb{E}[B^*]q \\nu^2}+1\\right)^2} \\left(\\frac{1-\\nu}{\\nu}\\right)^{p(H)} \\,\\mathrm{d} \\nu \\\\\n &= r(H)^{-1} \\int_0^1 \\frac{8\\nu}{\\sqrt{1+4\\mathbb{E}[B^*]q \\nu^2}\\left(\\sqrt{1+4\\mathbb{E}[B^*]q \\nu^2}+1\\right)^2} \\left(\\frac{1-\\nu}{\\nu}\\right)^{p(H)} \\,\\mathrm{d} \\nu \\stepcounter{equation}\\tag{\\theequation} \\label{eq:LaplaceTstar}\n\\end{align*}\nfor all $q\\geq 0$, where $r(H)$ was introduced in Theorem~\\ref{thm:ETextreme}. It follows from Sections~\\ref{subsubsec:ETFrechet} and \\ref{subsubsec:ETGumbel} that $\\lim_{q\\downarrow 0}\\lim_{{\\r\\uparrow 1}} \\mathbb{E}\\left[e^{-q(1-\\r)^2 T_\\mathrm{FB}^*}\\right]=1$. Additionally, the right-hand side is continuous in $q$, so that $(1-\\r)^2 T_\\mathrm{FB}^*$ converges in distribution to some non-degenerate random variable by the Continuity Theorem.\n\nThe Laplace transform inversion formula (12) in \\citet[p.234]{bateman1954tables} states that $f(t) = \\frac{2}{\\sqrt{\\pi}}\\sqrt{t} - 2te^t \\erfc(\\sqrt{t})$ is the Laplace inverse of $s^{-1\/2}(s^{1\/2}+1)^{-2}$, i.e.,\n\\begin{equation}\n \\int_0^\\infty e^{-qt} f(t) \\,\\mathrm{d} t = \\frac{1}{\\sqrt{q}\\left(\\sqrt{q}+1\\right)^2}.\n\\end{equation}\nConsequently, we have\n\\begin{equation}\n \\int_0^\\infty e^{-qt} g(t,\\nu) \\,\\mathrm{d} t\n = \\frac{1}{\\sqrt{1+4\\mathbb{E}[B^*]q \\nu^2}\\left(\\sqrt{1+4\\mathbb{E}[B^*]q \\nu^2}+1\\right)^2}\n\\end{equation}\nfor\n\\begin{equation}\n g(t,\\nu) \n = \\frac{e^{-\\frac{t}{4\\mathbb{E}[B^*]\\nu^2}}}{4\\mathbb{E}[B^*]\\nu^2} f\\left(\\frac{t}{4\\mathbb{E}[B^*]\\nu^2}\\right),\n\\end{equation}\nand hence relation~\\eqref{eq:LaplaceTstar} may be rewritten as\n\\begin{align*}\n \\lim_{{\\r\\uparrow 1}} 
\\mathbb{E}\\left[e^{-q(1-\\r)^2 T_\\mathrm{FB}^*}\\right]\n &= \\int_0^\\infty e^{-qt} \\left[\\int_0^1 8 r(H)^{-1} \\nu g(t,\\nu) \\left(\\frac{1-\\nu}{\\nu}\\right)^{p(H)} \\,\\mathrm{d} \\nu \\right] \\,\\mathrm{d} t \n =: \\int_0^\\infty e^{-qt} g^*(t) \\,\\mathrm{d} t.\n\\end{align*}\nWe conclude that the limiting random variable $\\lim_{{\\r\\uparrow 1}} (1-\\r)^2T_\\mathrm{FB}^*$ has density $g^*$. Furthermore, as \n\\begin{align*}\n \\lim_{{\\r\\uparrow 1}} \\mathbb{E}\\left[e^{-q(1-\\r)^2 T_\\mathrm{FB}^*}\\right]\n &= \\lim_{{\\r\\uparrow 1}} \\int_0^\\infty e^{-q\\tau} \\frac{\\P((1-\\r)^2T_\\mathrm{FB}>\\tau)}{(1-\\r)^2\\mathbb{E}[T_\\mathrm{FB}]} \\,\\mathrm{d} \\tau \\\\\n &= \\lim_{{\\r\\uparrow 1}} \\int_0^\\infty e^{-q\\tau} \\frac{\\P((1-\\r)^2T_\\mathrm{FB}>\\tau)}{r(H)\\mathbb{E}[B^*]\\overline{F}(\\inv{G}(\\r))} \\,\\mathrm{d} \\tau,\n\\end{align*}\nfor all $q\\geq 0$, we also see that $\\lim_{{\\r\\uparrow 1}} \\frac{\\P((1-\\r)^2T_\\mathrm{FB}>y)}{r(H)\\mathbb{E}[B^*]\\overline{F}(\\inv{G}(\\r))} = g^*(y)$ almost everywhere. \n\nTo see that $g^*$ is monotone, it suffices to show that $f(t)$ is monotone. To this end, we exploit the continued fraction representation (13.2.20a) in \\citet{cuyt2008handbook} and find\n\\begin{align}\n \\erfc(x) &= \\frac{x}{\\sqrt{\\pi}} e^{-x^2} \\cfrac{1}{x^2+\\cfrac{1\/2}{1+\\cfrac{1}{x^2+\\cfrac{3\/2}{1+\\dots}}}} \n \\geq\n \\frac{e^{-x^2}}{x\\sqrt{\\pi}} \\left(1-\\frac{x^2+3\/2}{2x^4+6x^2+3\/2}\\right).\n\\end{align}\nAs a consequence, one sees that\n\\begin{align*}\n \\frac{\\,\\mathrm{d}}{\\,\\mathrm{d} t} f(t)\n &= \\frac{1+2t}{\\sqrt{\\pi}\\sqrt{t}} - 2(1+t)e^t \\erfc(\\sqrt{t}) \n \\leq \\frac{1+2t - 2(1+t)\\left(1-\\frac{t+3\/2}{2t^2+6t+3\/2}\\right)}{\\sqrt{\\pi}\\sqrt{t}} \n = \\frac{-1+\\frac{2t^2+5t+3}{2t^2+6t+3\/2}}{\\sqrt{\\pi}\\sqrt{t}},\n\\end{align*}\nwhich is negative for all $t\\geq 0$. 
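Both the inversion formula for f and the closed-form derivative f'(t) used above lend themselves to a quick numerical check; the Python sketch below (illustrative only, standard library) integrates e^{-qt} f(t) by the midpoint rule at q = 1, where the transform equals 1/4, and compares f' with a central finite difference.

```python
import math

# f(t) = 2*sqrt(t/pi) - 2*t*e^t*erfc(sqrt(t)), whose Laplace transform is
# claimed to be s^{-1/2} (s^{1/2}+1)^{-2}; at q = 1 this equals 1/4.
def f(t):
    return 2.0 * math.sqrt(t / math.pi) - 2.0 * t * math.exp(t) * math.erfc(math.sqrt(t))

def laplace_f(q, T=60.0, n=200000):
    # midpoint rule for the integral of exp(-q t) f(t) over (0, T)
    h = T / n
    return sum(math.exp(-q * (i + 0.5) * h) * f((i + 0.5) * h) * h for i in range(n))

def fprime(t):
    # closed form of f'(t) derived in the text
    return (1.0 + 2.0 * t) / (math.sqrt(math.pi) * math.sqrt(t)) \
        - 2.0 * (1.0 + t) * math.exp(t) * math.erfc(math.sqrt(t))

approx = laplace_f(1.0)  # should be close to 0.25
h = 1e-6
fd_err = max(abs((f(t + h) - f(t - h)) / (2.0 * h) - fprime(t))
             for t in (0.5, 1.0, 3.0))
```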
We conclude the section with the postponed proofs of Propositions~\\ref{prop:WtoExp} and \\ref{prop:Ttail}.\n\n\n\\subsection{Proof of Proposition~\\ref{prop:WtoExp}}\n\\label{subsec:proofWtoExp}\nThe Pollaczek-Khintchine formula states that $\\mathbb{E}[e^{-s(1-\\r) W_x}] = \\frac{1-\\r_x}{1-\\r_x\\mathbb{E}[e^{-s(1-\\r)(B\\wedge x)^*}]}$. In this representation, we expand the Laplace-Stieltjes transform $\\mathbb{E}[e^{-s(1-\\r)(B\\wedge x)^*}]$ around $\\r=1$ to find\n\\begin{align*}\n \\mathbb{E}[e^{-s(1-\\r)W_x}] = \\frac{1-\\r_x}{1-\\r_x\\left(1-\\mathbb{E}[(B\\wedge x)^*] (1-\\r)s + o(1-\\r) \\right)}\n\\end{align*}\nand hence\n\\begin{align*}\n \\mathbb{E}[e^{-s (1-\\r)W_{x_\\r^\\nu}}] \n &= \\frac{1}{1+ \\frac{1-\\r}{1-\\r_{x_\\r^\\nu}} \\r_{x_\\r^\\nu}\\mathbb{E}[(B\\wedge x_\\r^\\nu)^*] s + o\\left(\\frac{1-\\r}{1-\\r_{x_\\r^\\nu}}\\right)} \n = \\frac{1}{1+ \\nu \\r_{x_\\r^\\nu}\\mathbb{E}[(B\\wedge x_\\r^\\nu)^*] s + o(1)},\n\\end{align*}\nwhere $o(1)$ vanishes as ${\\r\\uparrow 1}$. By definition of $x_\\r^\\nu$, $x_\\r^\\nu\\rightarrow \\infty$ and $\\r_{x_\\r^\\nu}\\uparrow 1$ as ${\\r\\uparrow 1}$ for any fixed $\\nu\\in(0,1)$. In particular, $\\lim_{{\\r\\uparrow 1}} \\mathbb{E}[e^{-s (1-\\r)W_{x_\\r^\\nu}}] = \\frac{1}{1+\\nu \\mathbb{E}[B^*] s}$. The proof is completed by applying the Continuity Theorem.\n\n\n\\subsection{Proof of Proposition~\\ref{prop:Ttail}}\n\\label{subsec:proofTtail}\nWe require functions $\\nu_l(\\r)\\downarrow 0$ and $\\nu_u(\\r)\\uparrow 1$ that distinguish the jobs that significantly contribute to the tail of $(1-\\r)^2T_\\mathrm{FB}$, and those that don't. For the former function, fix $\\gamma\\in(p(H)\/2,1)$ and let $\\nu_l(\\r)= (1-\\r)^{\\gamma}$ as in Section~\\ref{subsec:ToverETfinite}. This is possible as $p(H)<2$ for all $H$ to which the theorem applies. 
For the latter function, we refer to relation~\\eqref{eq:zbarGz} to verify that there exists a function $\\nu(\\r)\\uparrow 1$ such that $(1-\\r)x_\\r^{\\nu(\\r)} \\rightarrow 0$. Let $\\nu_u(\\r)$ be a function with this property, and write\n\\begin{align*}\n \\frac{\\P((1-\\r)^2 T_\\mathrm{FB} > \\mathbf{e}(q))}{\\overline{F}(\\inv{G}(\\r))} \\hspace{-104pt} & \\hspace{104pt} \\\\\n &= \\int_{\\nu=0}^{\\nu_l(\\r)} \\P((1-\\r)^2 T_\\mathrm{FB}(x_\\r^\\nu) > \\mathbf{e}(q)) \\frac{\\,\\mathrm{d} F(x_\\r^\\nu)}{\\overline{F}(\\inv{G}(\\r))} \n + \\int_{\\nu=\\nu_l(\\r)}^{\\nu_u(\\r)} \\P((1-\\r)^2 T_\\mathrm{FB}(x_\\r^\\nu) > \\mathbf{e}(q)) \\frac{\\,\\mathrm{d} F(x_\\r^\\nu)}{\\overline{F}(\\inv{G}(\\r))} \\\\\n &\\qquad + \\int_{\\nu=\\nu_u(\\r)}^1 \\P((1-\\r)^2 T_\\mathrm{FB}(x_\\r^\\nu) > \\mathbf{e}(q)) \\frac{\\,\\mathrm{d} F(x_\\r^\\nu)}{\\overline{F}(\\inv{G}(\\r))} \n =: \\hat{\\mathrm{I}}(\\r) + \\hat{\\mathrm{II}}(\\r) + \\hat{\\mathrm{III}}(\\r). \\stepcounter{equation}\\tag{\\theequation}\n\\end{align*}\nThe next paragraphs study the behaviour of $\\P((1-\\r)^2 T_\\mathrm{FB}(x) > \\mathbf{e}(q))$, which will then facilitate the analysis of the above three regions. Specifically, we will derive the asymptotic behaviour of $\\hat{\\mathrm{II}}(\\r)$ in terms of $q$, and show that $\\hat{\\mathrm{I}}(\\r)+\\hat{\\mathrm{III}}(\\r)=o(1)$ for any $q\\geq 0$. \n\nFrom relation~\\eqref{eq:busyperiodrepresentation}, we know that \n\\begin{equation}\n \\P((1-\\r)^2T_\\mathrm{FB}(x) > \\mathbf{e}(q))\n = \\P\\left(\\sup_{t\\in[0,\\mathbf{e}(q)]} X_x^\\r(t) \\leq (1-\\r)W_x+(1-\\r)x\\right),\n\\end{equation}\nwhere $X_x^\\r(t):=\\frac{t}{1-\\r}-\\sum_{i=1}^{N((1-\\r)^{-2}t)}(1-\\r)(B_i\\wedge x)$. 
Then $X_x^\\r(t)$ is a spectrally negative L{\\'e}vy process, and therefore relation~(8.4) in \\citet{kyprianou2014introductory} implies\n\\begin{equation}\n \\P((1-\\r)^2 T^\\r_{\\mathrm{FB}}(x) > \\mathbf{e}(q)) \n = \\P\\left(\\mathbf{e}(\\Phi(x,\\r,q)) \\leq (1-\\r)W_x + (1-\\r)x \\right), \\label{eq:tailexponentialtime}\n\\end{equation}\nwhere $\\Phi(x,\\r,q) := \\sup\\{s\\geq 0: \\psi(x,\\r,s) = q\\}$ is the right inverse of the Laplace exponent $\\psi(x,\\r,s) := t^{-1} \\log \\mathbb{E}[e^{s X^\\r_x(t)}]$ of $X_x^\\r(t)$, which does not depend on $t$. Since\n\\begin{align*}\n \\psi(x,\\r,s) \n &= t^{-1} \\log \\mathbb{E}\\left[e^{\\frac{st}{1-\\r} - \\sum_{i=1}^{N((1-\\r)^{-2}t)} (1-\\r)s(B_i\\wedge x)}\\right] \\\\\n &= \\frac{s}{1-\\r} + t^{-1} \\log \\mathbb{E}\\left[e^{- \\sum_{i=1}^{N((1-\\r)^{-2}t)} (1-\\r)s(B_i\\wedge x)}\\right]\n\\end{align*}\nand\n\\begin{align*}\n \\mathbb{E}[e^{- \\sum_{i=1}^{N((1-\\r)^{-2}t)} (1-\\r)s(B_i\\wedge x)}]\n &= \\sum_{n=0}^\\infty \\mathbb{E}[e^{-(1-\\r)s(B\\wedge x)}]^n \\frac{\\left(\\frac{\\l t}{(1-\\r)^2}\\right)^n}{n!} e^{-\\frac{\\l t}{(1-\\r)^2}} \\\\\n &= e^{-\\frac{\\l t}{(1-\\r)^2}\\left(1-\\mathbb{E}[e^{-(1-\\r)s(B\\wedge x)}]\\right)},\n\\end{align*}\nwe obtain $\\psi(x,\\r,s) = \\frac{s}{1-\\r} - \\frac{\\l}{(1-\\r)^2}\\left(1-\\mathbb{E}[e^{-(1-\\r)s(B\\wedge x)}]\\right)$. 
A Taylor expansion around $\\r=1$ now yields \n\\begin{align*}\n \\psi(x,\\r,s) \\hspace{-26pt} & \\\\\n &= \\frac{s}{1-\\r} - \\frac{\\l}{(1-\\r)^2}\\left(1-\\left(1-(1-\\r)s\\mathbb{E}[B\\wedge x] + \\frac{(1-\\r)^2s^2}{2}\\mathbb{E}[(B\\wedge x)^2] + o((1-\\r)^2s^2) \\right)\\right) \\\\\n &= \\frac{s}{1-\\r} - \\left(\\frac{\\r_x s}{1-\\r} - \\frac{\\l\\mathbb{E}[(B\\wedge x)^2]}{2}s^2 + o(s^2)\\right) \n = \\frac{1-\\r_x}{1-\\r}s + \\frac{\\r\\mathbb{E}[(B\\wedge x)^2]}{2\\mathbb{E}[B]}s^2 + o(s^2),\n\\end{align*}\nso that $\\lim_{{\\r\\uparrow 1}} \\psi(x_\\r^\\nu,\\r,s) = \\nu^{-1} s + \\mathbb{E}[B^*]s^2$ for all $\\nu>0$, and consequently\n\\begin{equation}\n \\lim_{{\\r\\uparrow 1}} \\Phi(x_\\r^\\nu,\\r,q) = \\frac{\\sqrt{\\nu^{-2}+4\\mathbb{E}[B^*]q}-\\nu^{-1}}{2\\mathbb{E}[B^*]}\n = \\frac{\\sqrt{1+4\\mathbb{E}[B^*]q \\nu^2}-1}{2\\mathbb{E}[B^*]\\nu}\n =: \\Phi(\\nu,q).\n \\label{eq:phinonzero}\n\\end{equation}\nSimilarly, one deduces that $\\lim_{{\\r\\uparrow 1}} \\nu_l(\\r) \\psi(x_\\r^{\\nu_l(\\r)},\\r,s) = s$ and\n\\begin{equation}\n \\lim_{{\\r\\uparrow 1}} \\nu_l(\\r)^{-1}\\Phi(x_\\r^{\\nu_l(\\r)},\\r,q) = q.\n \\label{eq:phizero}\n\\end{equation}\nWe now gathered sufficient tools to analyse the asymptotic behaviour of $\\hat{\\mathrm{II}}(\\r)$. \n\nFix $\\varepsilon\\in(0,1\/3)$. We have already shown that $(1-\\r)W^\\r_{x_\\r^\\nu} \\rightarrow \\mathbf{e}((\\nu \\mathbb{E}[B^*])^{-1})$ and $(1-\\r)x_\\r^\\nu \\rightarrow 0$ as ${\\r\\uparrow 1}$ for all $\\nu \\in(0,1)$. 
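One can verify directly that Phi(nu, q) in (eq:phinonzero) is the right inverse of the limiting Laplace exponent s/nu + E[B*] s^2; a short Python check (illustrative only, with an arbitrary assumed value for E[B*]):

```python
import math

# Phi(nu, q) should solve s/nu + EBstar*s^2 = q for every nu in (0,1), q >= 0.
def phi(nu, q, EBstar):
    return (math.sqrt(1.0 + 4.0 * EBstar * q * nu ** 2) - 1.0) / (2.0 * EBstar * nu)

def psi_limit(s, nu, EBstar):
    return s / nu + EBstar * s ** 2

EBstar = 1.3  # assumed value of E[B*], for illustration only
residual = max(abs(psi_limit(phi(nu, q, EBstar), nu, EBstar) - q)
               for nu in (0.1, 0.5, 0.9) for q in (0.0, 1.0, 5.0))
```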
Since $\\mathbf{e}(q_1)\\leq_{st} \\mathbf{e}(q_2)$ whenever $q_1\\geq q_2$, relations~\\eqref{eq:tailexponentialtime} and \\eqref{eq:phinonzero} imply\n\\begin{align*}\n \\P((1-\\r)^2 T^\\r_{\\mathrm{FB}}(x_\\r^\\nu) > \\mathbf{e}(q)) \\hspace{-60pt} & \\hspace{60pt}\n \\leq \\P\\left(\\mathbf{e}((1+\\varepsilon)\\Phi(\\nu,q)) \\leq \\mathbf{e}((1-\\varepsilon)(\\nu \\mathbb{E}[B^*])^{-1}) + \\varepsilon \\right) \\\\\n &= e^{-(1+\\varepsilon)\\varepsilon\\Phi(\\nu,q)}\\frac{(1+\\varepsilon)\\Phi(\\nu,q)}{(1+\\varepsilon)\\Phi(\\nu,q) + (1-\\varepsilon)(\\nu \\mathbb{E}[B^*])^{-1}} + 1 - e^{-\\varepsilon(1+\\varepsilon)\\Phi(\\nu,q)} \\\\\n &\\leq \\frac{\\sqrt{1+4\\mathbb{E}[B^*]q \\nu^2}-1}{\\sqrt{1+4\\mathbb{E}[B^*]q \\nu^2}+1-\\frac{4\\varepsilon}{1+\\varepsilon}} + 1 - e^{-\\varepsilon\\cdot \\frac{\\sqrt{1+4\\mathbb{E}[B^*]q\\nu^2}-1}{\\mathbb{E}[B^*]\\nu}}\n\\end{align*}\nfor all $\\rho\\geq \\rho_\\varepsilon$, $\\rho_\\varepsilon$ sufficiently close to one. Consequently, for all $\\rho\\geq \\rho_\\varepsilon$,\n\\begin{align*}\n \\hat{\\mathrm{II}}(\\r)\n &\\leq \\int_{\\nu_l(\\r)}^{\\nu_u(\\r)} \\frac{\\sqrt{1+4\\mathbb{E}[B^*]q \\nu^2}-1}{\\sqrt{1+4\\mathbb{E}[B^*]q \\nu^2}+1-\\frac{4\\varepsilon}{1+\\varepsilon}} \\frac{\\,\\mathrm{d} F\\left(\\inv{G}\\left(1-\\frac{1-\\r}{\\r}\\frac{1-\\nu}{\\nu}\\right)\\right)}{\\overline{F}(\\inv{G}(\\r))} \\\\\n &\\qquad + \\int_{\\nu_l(\\r)}^{\\nu_u(\\r)} \\left(1 - e^{-\\varepsilon\\cdot \\frac{\\sqrt{1+4\\mathbb{E}[B^*]q\\nu^2}-1}{\\mathbb{E}[B^*]\\nu}}\\right) \\frac{\\,\\mathrm{d} F\\left(\\inv{G}\\left(1-\\frac{1-\\r}{\\r}\\frac{1-\\nu}{\\nu}\\right)\\right)}{\\overline{F}(\\inv{G}(\\r))} \\\\\n &\\leq -\\left[\\frac{\\sqrt{1+4\\mathbb{E}[B^*]q \\nu^2}-1}{\\sqrt{1+4\\mathbb{E}[B^*]q \\nu^2}+1-\\frac{4\\varepsilon}{1+\\varepsilon}} \\frac{\\overline{F}\\left(\\inv{G}\\left(1-\\frac{1-\\r}{\\r}\\frac{1-\\nu}{\\nu}\\right)\\right)}{\\overline{F}(\\inv{G}(\\r))} \\right]_{\\nu=\\nu_l(\\r)}^{\\nu_u(\\r)} \\\\\n 
&\\qquad + \\int_{\\nu_l(\\r)}^{\\nu_u(\\r)} \\frac{8\\mathbb{E}[B^*]q \\nu}{\\sqrt{1+4\\mathbb{E}[B^*]q \\nu^2}\\left(\\sqrt{1+4\\mathbb{E}[B^*]q \\nu^2}+1-\\frac{4\\varepsilon}{1+\\varepsilon}\\right)^2} \\frac{\\overline{F}\\left(\\inv{G}\\left(1-\\frac{1-\\r}{\\r}\\frac{1-\\nu}{\\nu}\\right)\\right)}{\\overline{F}(\\inv{G}(\\r))} \\,\\mathrm{d} \\nu \\\\\n &\\qquad \\qquad - \\left[\\left(1 - e^{-\\varepsilon\\cdot \\frac{\\sqrt{1+4\\mathbb{E}[B^*]q\\nu^2}-1}{\\mathbb{E}[B^*]\\nu}}\\right) \\frac{\\overline{F}\\left(\\inv{G}\\left(1-\\frac{1-\\r}{\\r}\\frac{1-\\nu}{\\nu}\\right)\\right)}{\\overline{F}(\\inv{G}(\\r))}\\right]_{\\nu=\\nu_l(\\r)}^{\\nu_u(\\r)} \\\\\n &\\qquad \\qquad \\qquad + 4q \\int_{\\nu_l(\\r)}^{\\nu_u(\\r)} \\varepsilon \\cdot e^{-\\varepsilon\\cdot\\frac{\\sqrt{1+4\\mathbb{E}[B^*]q\\nu^2}-1}{\\mathbb{E}[B^*]\\nu}} \\frac{\\overline{F}\\left(\\inv{G}\\left(1-\\frac{1-\\r}{\\r}\\frac{1-\\nu}{\\nu}\\right)\\right)}{\\overline{F}(\\inv{G}(\\r))} \\,\\mathrm{d} \\nu.\n\\end{align*}\nIn Sections~\\ref{subsubsec:ETFrechet} and \\ref{subsubsec:ETGumbel}, we deduced that $\\overline{F}(\\inv{G}(1-(\\cdot)^{-1}))$ is regularly varying with index $-p(H)$. 
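The integration by parts performed here rests on the derivative identity d/dnu (S-1)/(S+1) = 8 E[B*] q nu / (S (S+1)^2) with S = sqrt(1 + 4 E[B*] q nu^2) (stated for eps = 0; the eps-dependent version only modifies the additive constant in the denominator). A finite-difference check in Python (illustrative only; the value of E[B*] is assumed):

```python
import math

a, q = 1.3, 2.0  # a plays the role of E[B*] (assumed value)
S = lambda nu: math.sqrt(1.0 + 4.0 * a * q * nu ** 2)
F = lambda nu: (S(nu) - 1.0) / (S(nu) + 1.0)
G = lambda nu: 8.0 * a * q * nu / (S(nu) * (S(nu) + 1.0) ** 2)  # claimed dF/dnu

h = 1e-6
deriv_err = max(abs((F(nu + h) - F(nu - h)) / (2.0 * h) - G(nu))
                for nu in (0.2, 0.5, 0.8))
```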
The Uniform Convergence Theorem then implies\n\\begin{align*}\n \\limsup_{{\\r\\uparrow 1}} \\hat{\\mathrm{II}}(\\r)\n &\\leq -\\left[\\frac{\\sqrt{1+4\\mathbb{E}[B^*]q \\nu^2}-1}{\\sqrt{1+4\\mathbb{E}[B^*]q \\nu^2}+1-\\frac{4\\varepsilon}{1+\\varepsilon}} \\left(\\frac{1-\\nu}{\\nu}\\right)^{p(H)} \\right]_{\\nu=0}^1 \\\\\n &\\qquad + \\int_0^1 \\frac{8\\mathbb{E}[B^*]q \\nu}{\\sqrt{1+4\\mathbb{E}[B^*]q \\nu^2}\\left(\\sqrt{1+4\\mathbb{E}[B^*]q \\nu^2}+1-\\frac{4\\varepsilon}{1+\\varepsilon}\\right)^2} \\left(\\frac{1-\\nu}{\\nu}\\right)^{p(H)} \\,\\mathrm{d} \\nu \\\\\n &\\qquad \\qquad - \\left[\\left(1 - e^{-\\varepsilon\\cdot \\frac{\\sqrt{1+4\\mathbb{E}[B^*]q\\nu^2}-1}{\\mathbb{E}[B^*]\\nu}}\\right) \\left(\\frac{1-\\nu}{\\nu}\\right)^{p(H)}\\right]_{\\nu=0}^1 \\\\\n &\\qquad \\qquad \\qquad + 4q \\int_0^1 \\varepsilon \\cdot e^{-\\varepsilon\\cdot \\frac{\\sqrt{1+4\\mathbb{E}[B^*]q\\nu^2}-1}{\\mathbb{E}[B^*]\\nu}} \\left(\\frac{1-\\nu}{\\nu}\\right)^{p(H)} \\,\\mathrm{d} \\nu \\\\ \n &= \\int_0^1 \\frac{8\\mathbb{E}[B^*]q \\nu}{\\sqrt{1+4\\mathbb{E}[B^*]q \\nu^2}\\left(\\sqrt{1+4\\mathbb{E}[B^*]q \\nu^2}+1-\\frac{4\\varepsilon}{1+\\varepsilon}\\right)^2} \\left(\\frac{1-\\nu}{\\nu}\\right)^{p(H)} \\,\\mathrm{d} \\nu \\\\\n &\\qquad + 4q \\int_0^1 \\varepsilon \\cdot e^{-\\varepsilon\\cdot \\frac{\\sqrt{1+4\\mathbb{E}[B^*]q\\nu^2}-1}{\\mathbb{E}[B^*]\\nu}} \\left(\\frac{1-\\nu}{\\nu}\\right)^{p(H)} \\,\\mathrm{d} \\nu.\n\\end{align*}\nNow, both integrals are bounded for all $\\varepsilon\\in(0,1\/3)$ and all $q\\geq 0$. Additionally, both integrands are increasing in $\\varepsilon$ for all $\\varepsilon$ sufficiently small. 
One may thus take the limit $\\varepsilon\\downarrow 0$ and apply the Dominated Convergence Theorem to find\n\\begin{equation}\n \\limsup_{{\\r\\uparrow 1}} \\hat{\\mathrm{II}}(\\r) \n \\leq \\int_0^1 \\frac{8\\mathbb{E}[B^*]q \\nu}{\\sqrt{1+4\\mathbb{E}[B^*]q \\nu^2}\\left(\\sqrt{1+4\\mathbb{E}[B^*]q \\nu^2}+1\\right)^2} \\left(\\frac{1-\\nu}{\\nu}\\right)^{p(H)} \\,\\mathrm{d} \\nu.\n\\end{equation}\n\nSimilarly, one may show that\n\\begin{equation*}\n \\liminf_{{\\r\\uparrow 1}} \\hat{\\mathrm{II}}(\\r)\n \\geq \\int_0^1 \\frac{8\\mathbb{E}[B^*]q \\nu}{\\sqrt{1+4\\mathbb{E}[B^*]q \\nu^2}\\left(\\sqrt{1+4\\mathbb{E}[B^*]q \\nu^2}+1+\\frac{4\\varepsilon}{1-\\varepsilon}\\right)^2} \\left(\\frac{1-\\nu}{\\nu}\\right)^{p(H)} \\,\\mathrm{d} \\nu,\n\\end{equation*}\nand we conclude\n\\begin{equation}\n \\lim_{{\\r\\uparrow 1}} \\hat{\\mathrm{II}}(\\r)\n = \\int_0^1 \\frac{8\\mathbb{E}[B^*]q \\nu}{\\sqrt{1+4\\mathbb{E}[B^*]q \\nu^2}\\left(\\sqrt{1+4\\mathbb{E}[B^*]q \\nu^2}+1\\right)^2} \\left(\\frac{1-\\nu}{\\nu}\\right)^{p(H)} \\,\\mathrm{d} \\nu.\n\\end{equation}\n\nSecond, consider $\\hat{\\mathrm{I}}(\\r)$. Define $M(\\rho):=(1-\\r)^{-\\hat{\\gamma}}$ for some $\\hat{\\gamma}\\in\\left(p(H)\/2,\\gamma\\right)$ and recall that $(1-\\r)x\\rightarrow 0$ and $(1-\\r)W_x^\\r \\stackrel{d}{\\rightarrow} 0$ (and hence in probability) for all $x\\leq x_\\r^{\\nu_l(\\r)}$. 
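The limiting integral can also be evaluated numerically, confirming that it is finite and that the eps-dependent bounds collapse onto it as eps decreases; the Python sketch below (illustrative only) does so in the Gumbel case p(H) = 1 with E[B*] = 1.

```python
import math

def bound_integral(q, p, eps, a=1.0, n=100000):
    # integrand of the eps-dependent upper bound; eps = 0 gives the limit
    c = 4.0 * eps / (1.0 + eps)
    h = 1.0 / n
    acc = 0.0
    for i in range(n):
        nu = (i + 0.5) * h
        S = math.sqrt(1.0 + 4.0 * a * q * nu * nu)
        acc += 8.0 * a * q * nu / (S * (S + 1.0 - c) ** 2) \
            * ((1.0 - nu) / nu) ** p * h
    return acc

vals = [bound_integral(q=1.0, p=1.0, eps=e) for e in (0.1, 0.01, 0.0)]
```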
Thus, for all $x\\leq x_\\r^{\\nu_l}$ and all $\\rho$ sufficiently close to one, we have\n\\begin{align*}\n \\hat{\\mathrm{I}}(\\r)\n &= \\int_{\\nu=0}^{\\nu_l(\\r)} \\P\\left(\\mathbf{e}(\\Phi(x_\\r^\\nu,\\r,q)) \\leq (1-\\r)W_{x_\\r^\\nu} + (1-\\r)x_\\r^\\nu \\right) \\frac{\\,\\mathrm{d} F(x_\\r^\\nu)}{\\overline{F}(\\inv{G}(\\r))} \\\\\n &\\leq \\frac{\\P\\left(\\mathbf{e}(\\Phi(x_\\r^{\\nu_l(\\r)},\\r,q)) \\leq 2 M(\\r) \\right)}{\\overline{F}(\\inv{G}(\\r))} + \\frac{\\P((1-\\r)W^\\r_x \\geq M(\\r))}{\\overline{F}(\\inv{G}(\\r))}\n =: \\hat{\\mathrm{Ia}}(\\r) + \\hat{\\mathrm{Ib}}(\\r).\n\\end{align*}\nFix $\\d\\in\\left(0,\\gamma+\\hat{\\gamma}-p(H) \\right)$; this interval is non-empty since $\\gamma,\\hat{\\gamma}>p(H)\/2$. Potter's Theorem \\citep[Theorem~1.5.6]{bingham1989regular} states that $\\overline{F}(\\inv{G}(\\r))\\geq C(1-\\r)^{p(H)+\\d}$ for some constant $C>0$ and all $\\rho$ sufficiently close to one. Also, one may readily deduce from relation~\\eqref{eq:phizero} that $\\mathbf{e}(\\Phi(x_\\r^{\\nu_l(\\r)},\\r,q))\\geq_{st} \\mathbf{e}(2q\\nu_l(\\r))$ for all $x\\leq x_\\r^{\\nu_l(\\r)}$ and $\\rho$ sufficiently close to one. 
Consequently,\n\\begin{align*}\n \\limsup_{{\\r\\uparrow 1}} \\hat{\\mathrm{Ia}}(\\r)\n &\\leq \\limsup_{{\\r\\uparrow 1}} \\frac{1-e^{-4q\\nu_l(\\r)M(\\r)}}{\\overline{F}(\\inv{G}(\\r))} \n \\leq \\lim_{{\\r\\uparrow 1}} \\frac{1-e^{-4q(1-\\r)^{\\gamma-\\hat{\\gamma}}}}{C(1-\\r)^{p(H)+\\d}} \\\\\n &= \\lim_{{\\r\\uparrow 1}} \\frac{4q(\\gamma-\\hat{\\gamma})(1-\\r)^{\\gamma-\\hat{\\gamma}-1}e^{-4q(1-\\r)^{\\gamma-\\hat{\\gamma}}}}{C\\left(p(H)+\\d\\right)(1-\\r)^{p(H)-1+\\d}} \\\\\n &= \\lim_{{\\r\\uparrow 1}} \\frac{4q(\\gamma-\\hat{\\gamma})}{C\\left(p(H)+\\d\\right)} \\cdot \\exp\\left[-4q(1-\\r)^{\\gamma-\\hat{\\gamma}} + \\left(\\gamma-\\hat{\\gamma}-p(H)-\\d\\right)\\log(1-\\r)\\right] \n = 0.\n\\end{align*}\nFor term $\\hat{\\mathrm{Ib}}(\\r)$, we apply Markov's inequality and Potter's Theorem to obtain\n\\begin{align*}\n \\limsup_{{\\r\\uparrow 1}} \\hat{\\mathrm{Ib}}(\\r)\n &\\leq \\limsup_{{\\r\\uparrow 1}} \\frac{\\frac{1-\\r}{1-\\r_x}\\r_x\\mathbb{E}[(B\\wedge x)^*]}{M(\\r) \\overline{F}(\\inv{G}(\\r))} \n \\leq \\lim_{{\\r\\uparrow 1}} C_1 \\frac{\\mathbb{E}[B^*] \\nu_l(\\r) }{M(\\r) (1-\\r)^{p(H)+\\d}} \\\\\n &= \\lim_{{\\r\\uparrow 1}} C_1 \\mathbb{E}[B^*] (1-\\r)^{\\gamma+\\hat{\\gamma}-p(H)-\\d}\n = 0.\n\\end{align*}\n\nFinally, consider term $\\hat{\\mathrm{III}}(\\r)$. 
For this term, the claim follows rapidly from the Uniform Convergence Theorem and the property $\\nu_u(\\r)\\uparrow 1$:\n\\begin{align*}\n \\limsup_{{\\r\\uparrow 1}} \\hat{\\mathrm{III}}(\\r) \n &\\leq \\limsup_{{\\r\\uparrow 1}} \\frac{\\overline{F}(x_\\r^{\\nu_u})}{\\overline{F}(\\inv{G}(\\r))} \n = \\limsup_{{\\r\\uparrow 1}} \\frac{\\overline{F}\\left(\\inv{G}\\left(1-\\frac{1-\\r}{\\r}\\frac{1-\\nu_u(\\r)}{\\nu_u(\\r)}\\right)\\right)}{\\overline{F}(\\inv{G}(\\r))} \\\\\n &= \\limsup_{{\\r\\uparrow 1}} \\left(\\frac{1-\\nu_u(\\r)}{\\r \\nu_u(\\r)}\\right)^{p(H)}\n = 0.\n\\end{align*}\nThis concludes the proof.\n\n\n\n\n\n\\section*{Acknowledgements}\nThe work of Bart Kamphorst is part of the free competition research programme with project number 613.001.219, which is financed by the Netherlands Organisation for Scientific Research (NWO). The research of Bert Zwart is partly supported by the NWO VICI grant 639.033.413.\n\n\n\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\nThe aim of this paper is to propose a continuation of the work initiated in \\cite{DDN06,DDNSV10,DegDelDoy} focusing on the derivation of asymptotic preserving schemes for kinetic plasma descriptions in the quasi-neutral limit. The purpose of these numerical methods is to provide a quasi-neutral description of the plasma with no constraints on the simulation parameters related to the Debye length, while retaining the ability to perform local up-scalings with non-neutral plasma descriptions. This brings a gain in computational efficiency, since the discretization parameters can be set according to the physics of interest rather than the small scales (namely the Debye length) described by the model. \n\n\nThe methodology introduced in these former works is generalized here to more singular limits. In this series of works, the limit models remain kinetic and the scales of interest are related to the electron dynamics. 
For instance, the quasi-neutral limit of the Vlasov-Maxwell system investigated in \\cite{DegDelDoy} can be interpreted as a kinetic description of the Electron-Magneto-Hydro-Dynamic (E-MHD) \\cite{gordeev_electron_1994,swanekamp_particlecell_1996,cho_anisotropy_2004}, accounting for the electron inertia, the massive ions being assumed at rest or slowly evolving. In the present paper, the objective is to go beyond the kinetic E-MHD with the aim of bridging the Vlasov-Maxwell system and Magneto-Hydro-Dynamic (MHD) models. In MHD systems, the scales of interest are defined by the overall plasma dynamics, which is governed by the ions, the fast scales associated with the electron inertia being filtered out from the equations. \n\n This preliminary work is therefore devoted to the derivation of a model hierarchy bridging either the Vlasov-Maxwell system and MHD models for magnetized plasmas or, for electrostatic frameworks, the Vlasov-Poisson system and the electron adiabatic response, also referred to as the Boltzmann relation (see \\cite{langmuir_interaction_1929,tonks_oscillations_1929,tonks_general_1929} for seminal works and \\cite{de_cecco_asymptotic_2017} for numerical investigations). A wide range of applications of the present investigations can be named: specifically, low-variance Particle-In-Cell methods or, more generally, numerical discretizations of kinetic models implementing a Micro-Macro decomposition of the distribution function. We refer for instance to \\cite{crestetto_kinetic\/fluid_2012,crouseilles_asymptotic_2011,dimarco_asymptotic_2014,lemou_new_2008} for Micro-Macro methods, to \\cite{degond_moment-guided_2011} for the Moment Guided method, to \\cite{chen_energy-_2011,chen_fluid_2014,chen_multi-dimensional_2015} for fluid-preconditioned fully implicit methods and to \\cite{Jin99,degond_asymptotic-preserving_2017} for Asymptotic-Preserving numerical methods. 
Another application can be envisioned with the hybrid coupling of Particle-In-Cell methods and MHD descriptions \\cite{schumer_mhd--pic_2001,daldorff_two-way_2014} and, more generally, coupling strategies such as the Current-Coupling-Scheme (CCS) and the Pressure-Coupling-Scheme (PCS) (see \\cite{park_threedimensional_1992,tronci_hybrid_2014} and the references therein). \n\n\n The aim of this work is to clarify how the asymptotic parameters interact with each other and define reduced models, but also to relate these parameters to meaningful physical quantities. The MHD regime is sometimes derived by letting $\\varepsilon_0$, the vacuum permittivity, go to zero (see for instance \\cite{jang_derivation_2012,tronci_neutral_2015}), which is referred to as the asymptotic from the full Maxwell system to the low-frequency pre-Maxwell equations in \\cite[see section~2.3.3]{freidberg_ideal_2014}. It is also common to let the electron to ion mass ratio go to zero to explain the vanishing of the electron inertia (\\cite{freidberg_ideal_2014,klingenberg_consistent_2016}) in deriving either MHD modelling or the Boltzmann relation. Although the right asymptotic models are recovered by this means, these assumptions do not account for changes in the system characteristics that may explain a regime transition: the electron to ion mass ratio remains constant and the same property holds true for the vacuum permittivity.\n\n\nThe outline of the paper is the following.\nThe plasma kinetic description is introduced in Sec.~\\ref{sec:eq:cont} together with the Maxwell system driving the evolution of the electromagnetic field. A dimensionless form of the system is stated in order to develop an asymptotic analysis and the derivation of reduced models. A hierarchy of quasi-neutral models is proposed in Sec.~\\ref{sec:fluid:hierarchy} for the Vlasov-Maxwell system. It encompasses fully kinetic, hybrid as well as single-fluid (MHD) plasma descriptions. 
The electrostatic framework is investigated in Sec.~\\ref{sec:electrostatic}. The electrostatic limit of the Maxwell system is performed. A hierarchy of models, similar to that of the electromagnetic framework, is derived. Finally, a synthesis of these asymptotic analyses is proposed in Sec.~\\ref{sec:conclusions}, devoted to conclusions.\n\n\n\\section{The Vlasov-Maxwell system in a dimensionless form}\\label{sec:eq:cont}\n\\subsection{Introduction}\nIn this section, the purpose is to unravel a series of asymptotic limits bridging the gap between the Vlasov-Maxwell system and a Magneto-Hydro-Dynamic (MHD) model. The difficulty is therefore to identify parameters explaining the transition from one description to the other and to relate these parameters to specific characteristics of the system. The tools mobilized to meet this aim are based on the asymptotic analysis of the Vlasov-Maxwell system. Since the low frequency plasma modelling is related to a fluid plasma description, the kinetic model is upgraded with collision operators. Therefore, the most refined modelling consists of a Vlasov equation for the electrons and the ions, augmented with a collision operator and coupled to the Maxwell system. Even if the physical model is non collisional or weakly collisional, the transition towards a fluid limit is accounted for by a collisional process, thanks to a BGK operator. This choice of collision operator is questionable from a strict modelling viewpoint; nonetheless, the purpose here is to easily derive the fluid limit at a limited computational cost. In this respect the BGK collision operator is a good candidate. \nFirst, all collisional processes are considered, including both like-particle and inter-species collisions. Nonetheless, only the minimal collisional process will be accounted for to derive an MHD regime from the kinetic model. \nThis point will be outlined in the following sections. 
The introduction of non-dimensional quantities will naturally reveal dimensionless parameters in the equations. Letting some of these parameters go to zero shapes the hierarchy of models derived for the Vlasov-Maxwell system and bridges the gap with MHD models.\n\\subsection{The Vlasov-BGK-Maxwell system}\n\\label{sec:model}\nThe most refined description of the plasma is constituted by two Vlasov equations, $f_i$ and $f_e$ being the ion and electron distribution functions\n\\begin{align}\n & \\partial_t f_i +v \\cdot \\nabla_x f_i + \\frac{q}{m_i} \\left( E + v \\times B\\right) \\cdot \\nabla_v f_i = \\mathcal{Q}_i \\,, \\\\\n & \\partial_t f_e +v \\cdot \\nabla_x f_e - \\frac{q}{m_e} \\left( E + v \\times B \\right)\\cdot \\nabla_v f_e = \\mathcal{Q}_e \\,.\n\\end{align}\nIn these equations, $q$ is the elementary charge, $m_\\alpha$ is the mass of the species $\\alpha$ ($\\alpha=e$ for the electrons and $i$ for the ions). The BGK collision operators $\\mathcal{Q}_\\alpha$ are given by \\cite{huba_nrl_2011}\n\\begin{equation}\n \\begin{split}\n \\mathcal{Q}_\\alpha &= \\mathcal{Q}_{\\alpha\\alpha}+ \\mathcal{Q}_{\\alpha\\beta} \\,,\\\\\n \\mathcal{Q}_{\\alpha\\alpha} = {\\nu_{\\alpha\\alpha}} \\left( \\mathcal{M}_{n_\\alpha,u_\\alpha,T_\\alpha} - f_\\alpha \\right)\\,, &\\qquad \n \\mathcal{Q}_{\\alpha\\beta}={\\nu_{\\alpha\\beta}} \\left( \\overline{\\mathcal{M}}_{n_\\alpha,\\overline{u}_\\beta,\\overline{T}_\\beta} - f_\\alpha \\right) \\,, \n \\end{split}\n\\end{equation}\n$\\nu_{\\alpha\\alpha}$ and $\\nu_{\\alpha\\beta}$ being the like-particle and inter-species collision frequencies, which can be defined as \\cite{degond_chapter_2007,spatschek_high_2012}\n\\begin{subequations}\\label{eq:Freq:Ordering}\n\\begin{alignat}{3}\n \\nu_{ii} &= K_0 \\, \\frac{n_i}{(k_B T_i)^\\frac{3}{2}}\\frac{\\sqrt{2}}{\\sqrt{m_i}}\\,, \\qquad \\nu_{ie} &&= K_0 \\, \\frac{n_e}{(k_B T_i)^\\frac{3}{2}}\\frac{\\sqrt{m_e}}{m_i}\\,, \\\\\n \\nu_{ee} &= K_0 \\, \\frac{n_e}{(k_B 
T_e)^\\frac{3}{2}}\\frac{\\sqrt{2}}{\\sqrt{m_e}}\\,, \\qquad \\nu_{ei}&&= K_0 \\, \\frac{n_i}{(k_B T_e)^\\frac{3}{2}}\\frac{1}{\\sqrt{m_e}} \\,, \n\\end{alignat} \nwhere $C$ denotes a constant with a magnitude equal to one and $\\ln(\\Lambda)$ the Coulomb logarithm; the ions being assumed mono-charged, \n\\begin{equation}\n K_0 = C \\left(\\frac{q^2}{4 \\pi \\epsilon_0}\\right)^2 \\ln(\\Lambda) \\,.\n\\end{equation}\n\\end{subequations}\n\nThe Maxwellians $\\mathcal{M}_{n_\\alpha,u_\\alpha,T_\\alpha}$ and $\\overline{\\mathcal{M}}_{n_\\alpha,\\overline{u}_\\beta,\\overline{T}_\\beta}$ are defined as\n\\begin{subequations}\\label{eq:def:BGK}\n\\begin{align}\n \\mathcal{M}_{n_\\alpha,u_\\alpha,T_\\alpha} &= n_\\alpha(x,t) \\left(\\frac{ m_\\alpha}{2\\pi k_B T_\\alpha(x,t)}\\right)^\\frac{D_v}{2} \\exp\\left(-\\frac{m_\\alpha|u_\\alpha(x,t)-v |^2}{2 k_B T_\\alpha(x,t)} \\right) \\,,\\\\\n \\overline{\\mathcal{M}}_{n_\\alpha,\\overline{u}_\\beta,\\overline{T}_\\beta} &= n_\\alpha(x,t) \\left(\\frac{ m_\\alpha}{2\\pi k_B \\overline{T}_\\beta(x,t)}\\right)^\\frac{D_v}{2} \\exp\\left(-\\frac{m_\\alpha|\\overline{u}_\\beta(x,t)-v |^2}{2 k_B \\overline{T}_\\beta(x,t)} \\right) \\,, \\label{eq:BGK:inter}\n\\end{align}\n\\end{subequations}\nwhere $D_v$ denotes the dimension of the velocity space and $k_B$ is the Boltzmann constant. 
The Maxwellian parameters $n_\\alpha$, $u_\\alpha$ and $T_\\alpha$ are the density, mean velocity and temperature associated with the distribution function $f_\\alpha$, defined as\n\\begin{equation}\n n_\\alpha = \\int_{\\Omega_v} f_\\alpha dv \\,, \\qquad n_\\alpha u_\\alpha= \\int_{\\Omega_v} v \\, f_\\alpha \\, dv \\,, \\qquad \\frac{1}{\\gamma-1}n_\\alpha k_B T_\\alpha = \\frac{m_\\alpha}{2}\\int_{\\Omega_v} |v-u_\\alpha|^2\\, f_\\alpha \\, dv \\,,\n\\end{equation}\nwith $\\gamma$ the specific heat ratio whose value depends on the dimensionality of the velocity space $D_v$ through\n\\begin{equation}\n \\gamma - 1 = \\frac{2}{D_v} \\,.\n\\end{equation}\nThe collision operators verify the following conservation properties \n\\begin{subequations}\n \\begin{alignat}{3}\n &\\int \\mathcal{Q}_{\\alpha\\alpha} \\, m_\\alpha v^n \\, dv = 0 \\,, \\qquad &&n = 0,\\ldots,2\\,, \\\\\n &\\int \\mathcal{Q}_{\\alpha\\beta} \\, m_\\alpha v^n \\, dv + \\int \\mathcal{Q}_{\\beta\\alpha}\\, m_\\beta v^n\\, dv = 0 \\,, \\qquad &&n = 0,\\ldots,2 \\,. \\label{eq:prop:BGK:inter}\n \\end{alignat}\n\\end{subequations}\n\nThe mean velocity and temperature $(\\overline{u}_\\beta,\\overline{T}_\\beta)$ in the inter-species collision operator expression \\eqref{eq:BGK:inter} should be chosen with care in order to guarantee the total momentum and energy conservation. Indeed, the following identities\n\\begin{subequations}\n\\begin{align}\n \\int \\mathcal{Q}_{\\alpha\\beta} \\,m_\\alpha v \\, dv &= \\nu_{\\alpha\\beta} m_\\alpha n_\\alpha \\left( \\overline{u}_\\beta - u_\\alpha \\right) \\,, \\\\\n \\int \\mathcal{Q}_{\\alpha\\beta} \\,m_\\alpha \\frac{v^2}{2} \\, dv &= \\nu_{\\alpha\\beta} \\left( \\frac{D_v}{2} n_\\alpha k_B \\left( \\overline{T}_\\beta - T_\\alpha \\right) + \\frac{1}{2}m_\\alpha n_\\alpha \\left(\\overline{u}^2_\\beta- u_\\alpha^2 \\right) \\right)\\,\n\\end{align}\n\\end{subequations}\nhold true for the operators defined by \\eqref{eq:def:BGK}. 
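The moment definitions above can be illustrated numerically; the following Python sketch (illustrative only; it assumes D_v = 1 and normalizes m_alpha = k_B = 1) checks that a discretized Maxwellian returns its parameters (n, u, T), with the internal energy matching n k_B T/(gamma - 1) times 1/2 consistency, i.e. (m/2) integral of |v-u|^2 M equals n T/2 since gamma - 1 = 2/D_v = 2.

```python
import math

def maxwellian(v, n, u, T):
    # 1D Maxwellian with m = k_B = 1
    return n / math.sqrt(2.0 * math.pi * T) * math.exp(-(v - u) ** 2 / (2.0 * T))

def moments(n, u, T, L=30.0, N=100000):
    # midpoint rule on [-L, L]; the Gaussian tails beyond L are negligible
    h = 2.0 * L / N
    m0 = m1 = m2 = 0.0
    for i in range(N):
        v = -L + (i + 0.5) * h
        fv = maxwellian(v, n, u, T)
        m0 += fv * h                        # density
        m1 += v * fv * h                    # momentum
        m2 += 0.5 * (v - u) ** 2 * fv * h   # internal energy, (m/2)|v-u|^2
    return m0, m1, m2

n, u, T = 2.0, 0.7, 1.5
m0, m1, m2 = moments(n, u, T)
# expected: m0 = n, m1 = n*u, m2 = n*T/2 (D_v = 1, so gamma - 1 = 2)
```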
The trivial choice $(\\overline{u}_\\beta,\\overline{T}_\\beta)=({u}_\\beta,{T}_\\beta)$ does ensure the plasma total momentum conservation, provided that $\\nu_{ei} m_e n_e = \\nu_{ie} m_i n_i$. However, in this case, the plasma total energy is not conserved. We refer to \\cite{greene_improved_1973} for a seminal work, as well as \\cite{klingenberg_consistent_2016} and the references therein for recent advances, on the choice of these parameters compliant with the desired properties \\eqref{eq:prop:BGK:inter} of the inter-species collision operators.\n\n\nThe electromagnetic field $(E,B)$ evolution is driven by the Maxwell system:\n\\begin{align}\n& \\frac{1}{c^2}\\partial_t E - \\nabla_x \\times B= - \\mu_0 J\\,,\\label{eq:std:Amp}\\\\\n& \\partial_t B+ \\nabla_x \\times E = 0\\,,\\label{eq:std:Far}\\\\\n&\\nabla_x \\cdot E= \\frac{\\rho}{\\epsilon_0}\\,,\\label{eq:std:divE}\\\\\n& \\nabla_x \\cdot B = 0\\,,\\label{eq:std:divB}\n\\end{align}\nwhere $c$ is the speed of light, $\\mu_0$ the vacuum permeability and $\\epsilon_0$ the vacuum permittivity verifying $\\mu_0\\epsilon_0 c^2 = 1$. The Maxwell sources are the particle currents and densities\n\\begin{subequations}\n\\begin{align}\n \\rho &= q (n_i - n_e) \\,, \\\\\n J &= q (n_i u_i - n_e u_e) \\,.\n\\end{align} \n\\end{subequations}\n\n\nThe definition of the collision frequencies as stated by Eqs.\\eqref{eq:Freq:Ordering} relates different time scales. Indeed, because of their different masses, ions and electrons are not equally affected by collisions. These properties are more clearly emphasized when working with dimensionless variables, as proposed in the next section.\n\n\n\\subsection{Scaling of the Vlasov-Maxwell system}\\label{sec:scaling}\n\\label{sec-scaling}\n\n\nThe equations are written with dimensionless quantities in order to easily identify different regimes. 
The scaling is introduced under the {\\it a priori} assumption that the electronic and ionic temperatures, densities and mean velocities are comparable, with magnitudes denoted $T_0$, $n_0$ and $u_0$. These scales define the typical Debye length as well as the electron plasma period \n\\begin{equation*}\n\\lambda_D=\\sqrt{\\frac{\\epsilon_0 k_B T_0}{q^2 n_0}}\\,, \\qquad \\tau_{pe}=\\sqrt{\\frac{m_e \\epsilon_0}{q^2 n_0}}\\,.\n\\end{equation*}\nWe denote by $x_0$ and $t_0$ the characteristic space and time scales of the phenomena observed, which yields the velocity of interest $\\vartheta_0 = x_0\/t_0$. The magnitude of the thermal velocity for the species $\\alpha$ is denoted $v_{0,\\alpha}$ with $v_{0,\\alpha}^2=k_B T_0\/m_\\alpha$. Due to the different masses, the thermal velocity of the electrons differs from that of the ions. The reference thermal velocity $v_0$ will be defined as the ionic one, $v_{0}^2=k_B T_0\/m_i$, hence $v_{0,e}= v_0\/\\varepsilon$ and $v_{0,i}=v_0$, where $\\varepsilon^2=m_e\/m_i$. Finally, the particle current scale is defined as $J_0 = q n_0 u_0$.\nThe dimensionless variables are defined according to \n\\begin{gather*}\n x^* = \\frac{x}{x_0} \\,, \\quad t^* = \\frac{t}{t_0} \\,,\\quad v^* = \\frac{v}{v_{0,\\alpha}} \\,, \\quad f^* = \\frac{f}{n_0\/(v_{0,\\alpha})^{D_v}} \\,, \\quad n^* = \\frac{n}{n_0} \\,,\\quad J^* = \\frac{J}{q n_0 u_0} \\,, \\\\ E^* = \\frac{E}{E_0} \\,,\\quad B^* = \\frac{B}{B_0} \\,, \n\\end{gather*}\n the collision frequencies verifying\n\\begin{equation}\\label{eq:scaling:relation:freq}\n \\nu_{ee,0} = \\nu_{ei,0} = \\frac{1}{\\varepsilon} \\nu_{ii,0}\\,, \\qquad \\nu_{ie,0} = \\varepsilon \\nu_{ii,0}\\,, \\qquad \\varepsilon = \\sqrt{\\frac{m_e}{m_i}}\\,.\n\\end{equation}\nOn the fastest time scale, the electron distribution function relaxes towards a Maxwellian. On the same time scale, the electron mean velocity and temperature relax towards those of the ions. 
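The orderings in (eq:scaling:relation:freq) follow from the expressions (eq:Freq:Ordering) when densities and temperatures are comparable; the Python sketch below (illustrative only, with K_0 = k_B = 1, equal n and T, and a hydrogen-like mass ratio) recovers them up to O(1) factors.

```python
import math

# Collision frequencies of eq. (eq:Freq:Ordering) with K_0 = k_B = 1 and
# equal densities and temperatures; only the mass dependence remains.
def frequencies(me, mi, n=1.0, T=1.0):
    nu_ii = n / T ** 1.5 * math.sqrt(2.0) / math.sqrt(mi)
    nu_ie = n / T ** 1.5 * math.sqrt(me) / mi
    nu_ee = n / T ** 1.5 * math.sqrt(2.0) / math.sqrt(me)
    nu_ei = n / T ** 1.5 / math.sqrt(me)
    return nu_ii, nu_ie, nu_ee, nu_ei

me, mi = 1.0, 1836.0  # hydrogen-like mass ratio, for illustration
eps = math.sqrt(me / mi)
nu_ii, nu_ie, nu_ee, nu_ei = frequencies(me, mi)
# ratios to nu_ii scale as 1/eps, 1/eps and eps up to O(1) factors
ratio_ee = nu_ee / nu_ii
ratio_ei = nu_ei / nu_ii
ratio_ie = nu_ie / nu_ii
```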
The relaxation of the ionic distribution function towards the local equilibrium is slower, by a factor $\\varepsilon^{-1}=\\sqrt{m_i\/m_e}$. Finally, the ions are almost unaffected by the collisions against the electrons. The relaxation of the ionic distribution function towards that of the electrons defines the largest time scale, by a factor $\\varepsilon^{-1}$ compared to the relaxation towards thermodynamic equilibrium. \n\n\n\n\nThe dimensionless ionic and electronic Vlasov equations can be rewritten as (keeping the same notations for dimensionless variables): \n\\begin{align}\n \\begin{split}\n&\\xi \\partial_t f_i + v \\cdot \\nabla_x f_i + \\eta (E+ \\frac{\\beta}{\\xi} v \\times B)\\cdot \\nabla_v f_i =\\\\\n&\\hspace*{6cm} \\frac{ \\xi}{\\kappa} \\Big( \\nu_{ii} \\left(\\mathcal{M}_{n_i,u_i,T_i} - f_i\\right) + \\varepsilon \\nu_{ie} \\left( \\overline{\\mathcal{M}}_{n_i,u_e,T_e} - f_i \\right) \\Big) \\,,\\label{VMadim:a}\\\\ \n \\end{split} \\\\ \n \\begin{split}\n&\\xi \\varepsilon \\partial_t f_e + v \\cdot \\nabla_x f_e - \\eta(E+ \\frac{\\beta}{\\varepsilon \\xi} v \\times B)\\cdot \\nabla_v f_e = \\\\\n& \\hspace*{6cm} \\frac{ \\xi}{\\kappa} \\Big( \\nu_{ee} \\left(\\mathcal{M}_{n_e,u_e,T_e} - f_e\\right) + \\nu_{ei} \\left( \\overline{\\mathcal{M}}_{n_e,u_i,T_i} - f_e \\right) \\Big)\\,,\\label{VMadim:b}\\\\ \n \\end{split}\n\\end{align}\ntogether with the dimensionless Maxwell system, which reads\n\\begin{subequations}\\label{eq:Maxwell:adim}\n\\begin{align}\n& \\lambda^2 \\eta (\\alpha^2\\frac{\\partial {E}}{\\partial t} - \\beta \\nabla_x \\times B)=-\\alpha^2 \\frac{M}{\\xi} J\\,,\\label{eq:Maxwell:adim:Ampere}\\\\\n& \\beta \\partial_t B + \\nabla_x \\times E = 0\\,,\\label{VMadim:c}\\\\\n&\\lambda^2 \\eta \\nabla_x \\cdot E= n_i - n_e\\,,\\\\\n& \\nabla_x \\cdot B = 0\\,,\\label{VMadim:e}\\\\\n& J = n_i u_i - n_e u_e \\,.\n\\end{align}\n\\end{subequations}\nThis system is written using the following dimensionless 
parameters\n\\begin{equation}\n\\left\\{ \\begin{array}[c]{l}\n \\displaystyle \\varepsilon^2=\\frac{m_e}{m_i} \\text{ the ratio of the electronic and ionic masses}\\,,\\\\[3mm]\n \\displaystyle \\lambda= \\frac{\\lambda_D}{x_0} \\text{ the scaled Debye length}\\,,\\\\[3mm]\n \\displaystyle M = \\frac{u_0}{v_{0}} \\text{ the ionic Mach number, with } v_{0} = \\sqrt{\\frac{k_BT_0}{m_i}} \\text{ the ionic speed of sound} \\,,\\\\[3mm]\n \\displaystyle \\xi=\\frac{\\vartheta_0}{v_0} \\text{ the ratio of the typical velocity to the ionic speed of sound}\\,, \\\\[3mm]\n \\displaystyle \\alpha=\\frac{\\vartheta_0}{c} \\text{ the ratio of the typical velocity to the speed of light}\\,, \\\\[3mm]\n \\displaystyle \\eta=\\frac{q x_0 E_0}{k_B T_0} \\text{ the ratio of the electric and plasma internal energies} \\,, \\\\[3mm]\n \\displaystyle \\beta=\\frac{\\vartheta_0B_0}{E_0} \\text{ the induced electric field relative to the total electric field}\\,,\\\\[3mm] \n \\kappa^{-1} = \\nu_{ii,0} t_0 \\text{ the number of ion-ion collisions during the typical time}\\,.\n \\end{array} \\right.\n\\end{equation}\n\nThe dimensionless Maxwellians are defined by\n\\begin{subequations}\\label{eq:def:Maxwellians}\n\\begin{align}\n \\mathcal{M}_{n_e,u_\\alpha,T_\\alpha} = \\overline{\\mathcal{M}}_{n_e,u_\\alpha,T_\\alpha} &= n_e(x,t) \\left(\\frac{1}{2\\pi T_\\alpha(x,t)}\\right)^\\frac{D_v}{2} \\exp\\left(-\\frac{|M\\varepsilon u_\\alpha(x,t)-v |^2}{2T_\\alpha(x,t)} \\right) \\,,\\\\\n \\mathcal{M}_{n_i,u_\\alpha,T_\\alpha} = \\overline{\\mathcal{M}}_{n_i,u_\\alpha,T_\\alpha} &= n_i(x,t) \\left(\\frac{1}{2\\pi T_\\alpha(x,t)}\\right)^\\frac{D_v}{2} \\exp\\left(-\\frac{|M u_\\alpha(x,t)-v |^2}{2T_\\alpha(x,t)} \\right) \\,.\n\\end{align} \n\\end{subequations}\n\nSome comments are in order regarding the meaning of these parameters and the scaling relations. \n\nThe typical mean velocity and temperature are assumed to be the same for the electrons and the ions. 
Accordingly, the relaxation of the electron mean velocity and temperature towards those of the ions may be assumed to contribute only marginally to the evolution of the system. This assumption is therefore consistent with the investigation of resistivity-free plasma models and the neglect of the inter-species collisions.\n\n\nThe parameter $\\xi$ is intended to provide a measure of how the electronic and ionic dynamics are resolved. The choice $\\xi=1$ means that the system is assumed to evolve at a speed comparable to the ionic thermal velocity $v_0$, while $\\xi\\varepsilon=1$ performs a rescaling of this typical velocity to the electron microscopic velocity. Setting $\\xi =M$ relates the typical speed of the system to the ionic mean velocity $u_0$. Actually, the Mach number measures the gap between the microscopic (thermal) and macroscopic velocity scales.\n\nThe scaling relation $\\eta =1$ is generally assumed in single-fluid plasma representations. The plasma internal energy is then on a par with the electric energy. This equilibrium is fundamental in the derivation of the Boltzmann relation. The identity $\\beta M = \\xi$ is also common in single-fluid plasma models. This amounts to assuming that the induced electric field scales as the product of the plasma mean velocity and the typical magnetic field, $E_0= u_0 B_0$. In other words, the magnetic field is essentially transported with the plasma flow. This latter assumption is in line with Alfv\\'en's frozen-in theorem \\cite{moreau_magnetohydrodynamics_1990,freidberg_ideal_2014,davidson_introduction_2001,schnack_lectures_2009} characteristic of ideal MHD models: the magnetic field is frozen into the plasma and transported by its flow. \n\nThe derivation of reduced models consists in identifying small dimensionless parameters and letting them go to zero. The smallness of the scaled Debye length refers to a typical space scale much larger than the physical Debye length. 
This means that the charge separations occurring on space scales comparable to the Debye length are assumed to be unimportant for the evolution of the system. Sending the scaled Debye length to zero performs a low frequency filtering of the equations, thus deriving a quasi-neutral model.\nIn the context of the derivation of numerical methods, the typical length relates to the mesh size. This outlines the advantage of reduced models: the low frequency filtering operated by vanishing small parameters makes it possible to derive numerical methods with discretization parameters (mesh size and time step) unconstrained by the small scales filtered out from the original equations.\n\n\n\\section{A hierarchy of quasi-neutral models bridging the Vlasov-Maxwell system and the Hall-MHD regime}\\label{sec:fluid:hierarchy}\n\n\\subsection{Handling the fluid and quasi-neutral limits}\n\n\\subsubsection{Introduction}\nThe aim here is to reduce the number of free dimensionless parameters, deriving by this means different reduced models well suited for the description of low frequency phenomena. \nAs depicted in Fig.~\\ref{fig:hierarchy}, the starting point of this hierarchy of models implements the minimal upgrades of the Vlasov-Maxwell system to recover a MHD regime. Precisely, only the like-species collisions are taken into account in the initial model, in order for each distribution function to relax towards its local equilibrium. 
This yields\n\\begin{subequations}\\label{eq:sys:Boltzmann}\n\\begin{align}\n \\begin{split}\n&\\xi \\partial_t f_i + v \\cdot \\nabla_x f_i + \\eta (E+ \\frac{\\beta}{\\xi} v \\times B)\\cdot \\nabla_v f_i =\\frac{ \\xi}{\\kappa}\\nu_{ii} \\left(\\mathcal{M}_{n_i,u_i,T_i} - f_i\\right) \\,,\\label{VMadimBis:a}\\\\ \n \\end{split} \\\\ \n \\begin{split}\n&\\xi \\varepsilon \\partial_t f_e + v \\cdot \\nabla_x f_e - \\eta(E+ \\frac{\\beta}{\\varepsilon \\xi} v \\times B)\\cdot \\nabla_v f_e =\\frac{ \\xi}{\\kappa} \\nu_{ee} \\left(\\mathcal{M}_{n_e,u_e,T_e} - f_e\\right) \\,,\\label{VMadimBis:b}\\\\ \n \\end{split}\n\\end{align}\n\\end{subequations}\nfor the evolution of the ions and electrons, coupled to the dimensionless Maxwell system defined by Eqs.~\\eqref{eq:Maxwell:adim}.\n\nFrom the scaling relations stated by Eq.~\\eqref{eq:scaling:relation:freq}, discarding the inter-species collisions makes sense for the ions. Due to their large mass, the ions are almost unaffected by encounters with electrons. \nFor the electrons, this assumption is not in line with the scaling of the like and inter-species collision frequencies. However, the purpose here is to propose a physically meaningful framework to clarify the foundation of a numerical method bridging the gap between a kinetic description of a weakly (or non-) collisional magnetized plasma and a MHD regime. The inter-species collisions give rise to resistivity in the macroscopic system, which is not the class of models targeted in this work. 
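The frequency ordering \\eqref{eq:scaling:relation:freq} can be made concrete with a short numerical illustration (a sketch using hydrogen masses; the numerical values are illustrative and not taken from the text):

```python
# Illustrative sanity check (not part of the model): the collision-frequency
# hierarchy nu_ee = nu_ei = nu_ii/eps, nu_ie = eps*nu_ii, for a hydrogen plasma.
import math

m_e = 9.109e-31  # electron mass [kg]
m_i = 1.673e-27  # proton mass [kg]

eps = math.sqrt(m_e / m_i)  # mass-ratio parameter, ~ 1/42.9 for hydrogen

# Frequencies relative to the ion-ion collision frequency nu_ii,0 = 1:
nu_ii = 1.0
nu_ee = nu_ii / eps          # = nu_ei: electrons relax fastest
nu_ie = eps * nu_ii          # ion-electron: slowest process

print(f"eps         = {eps:.4f}")          # ~ 0.0233
print(f"nu_ee/nu_ii = {nu_ee:.1f}")        # ~ 42.9: electron relaxation ~40x faster
print(f"nu_ie/nu_ee = {nu_ie / nu_ee:.2e}")  # = eps^2 ~ 5.4e-4
```

For hydrogen, $\\nu_{ie}$ is thus a factor $\\varepsilon^2\\approx 5\\times 10^{-4}$ smaller than $\\nu_{ee}$, which quantifies why discarding the ion-electron collisions is harmless for the ions.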
\n\n\n\\begin{figure}[!ht]\n \\centering\n \\includegraphics[width=0.7\\textwidth]{VMtoMHD-2-pics.pdf}\n \\caption{Fluid and kinetic (quasi-neutral) model hierarchies derived from the Vlasov-BGK-Maxwell system.}\n \\label{fig:hierarchy}\n\\end{figure}\n\\subsubsection{Handling the fluid limit}\nTo easily identify a fluid regime, the distribution function is decomposed into a Maxwellian $\\mathcal{M}_{n_\\alpha,u_\\alpha,T_\\alpha} $ and a deviation from this Maxwellian ${\\kappa} \\, g_\\alpha$ according to\n \\begin{subequations}\\label{def:micro-macro}\n\\begin{equation}\n f_\\alpha = \\mathcal{M}_{n_\\alpha,u_\\alpha,T_\\alpha} + {\\kappa} \\, g_\\alpha \\,,\n\\end{equation}\nthe deviation verifying \n\\begin{equation}\n \\big< v^n g_\\alpha \\big> = \\int_{\\Omega_v} v^n g_\\alpha \\, dv =0\\,, \\quad n=0,1,2\\,.\n\\end{equation} \n \\end{subequations}\nWith this decomposition, the Vlasov-Boltzmann equations \\eqref{eq:sys:Boltzmann} can be recast into a hydrodynamic set of equations with kinetic corrections, depending on the moments of the deviation $g_\\alpha$, yielding\n\\begin{subequations}\\label{eq:Sys:MM:ions}\n\\begin{align}\n &\\frac{\\xi}{M} \\partial_t n_i + \\nabla_x \\cdot (n_i u_i) = 0 \\label{eq:cons:ni}\\\\\n \\begin{split}\n &M^2\\left(\\frac{\\xi}{M} \\partial_t (n_i u_i) + \\nabla_x \\cdot (n_iu_i \\otimes u_i) \\right)+ \\nabla_x p_i - \\eta \\, n_i \\big( E + \\frac{\\beta M}{\\xi} u_i \\times B \\big) =\\\\\n & \\hspace*{11cm}- {\\kappa}\\nabla_x \\cdot \\left< v \\otimes v g_i \\right> \\,, \n \\end{split} \\\\\n& \\frac{\\xi}{M} \\partial_t W_i + \\nabla_x \\cdot \\big((W_i + p_i)u_i \\big) - \\eta n_i E\\cdot u_i = -\\frac{\\kappa}{M} \\nabla_x \\cdot \\left< \\frac{|v|^2}{2} v g_i\\right>\\,.\n\\end{align}\nwith \n\\begin{equation}\n W_i = (M)^2 \\frac{1}{2} n_i u_i^2 + \\frac{p_i}{\\gamma-1} \\,, \\qquad p_i = n_i T_i \\,,\n\\end{equation}\n\\end{subequations}\nfor the ions, and an equivalent system for the 
electrons,\n\\begin{subequations}\\label{eq:Sys:MM:electrons}\n\\begin{align}\n &\\frac{\\xi}{M} \\partial_t n_e + \\nabla_x \\cdot (n_e u_e) = 0 \\label{eq:cons:ne}\\\\\n \\begin{split}\n &\\left(M\\varepsilon\\right)^2\\left(\\frac{\\xi}{M} \\partial_t (n_e u_e) + \\nabla_x \\cdot (n_eu_e \\otimes u_e) \\right)+ \\nabla_x p_e + \\eta\\, n_e \\big( E + \\frac{\\beta M}{\\xi}u_e \\times B \\big) =\\\\\n & \\hspace*{11cm}- {\\kappa}\\nabla_x \\cdot \\left< v \\otimes v g_e \\right> \\,, \n \\end{split} \\\\\n& \\frac{\\xi}{M} \\partial_t W_e + \\nabla_x \\cdot \\big((W_e + p_e)u_e \\big) + \\eta n_e E\\cdot u_e = -\\frac{\\kappa}{M\\varepsilon} \\nabla_x \\cdot \\left< \\frac{|v|^2}{2} v g_e\\right>\\,.\n\\end{align}\nwith \n\\begin{equation}\n W_e = (M\\varepsilon)^2 \\frac{1}{2} n_e u_e^2 + \\frac{p_e}{\\gamma-1}\\,, \\quad p_e = n_e T_e\\,.\n\\end{equation}\n\\end{subequations}\nThese two systems are coupled to a set of equations (the Maxwell system \\eqref{eq:Maxwell:adim}) driving the changes in the electromagnetic field.\n\n\n\\subsubsection{On the quasi-neutral limit}\nOmitting the collisions, the fastest process in this system is the propagation of electromagnetic waves at the speed of light, described by the Maxwell system. The Debye length as well as the plasma period also define small space and time scales for large plasma densities.\nThe quasi-neutral limit is defined by the following scaling relations:\n\\begin{equation}\\label{eq:def:QN:limit}\n (\\alpha,\\lambda) \\to 0\\,, \\quad {\\alpha}\\sim{\\lambda} \\,.\n\\end{equation}\nThis amounts to assuming that the scaled Debye length is small compared to the typical length and that the system evolves at a speed much lower than the speed of light. By this means, the small scales related to these parameters are filtered out of the equations. The last hypothesis $\\alpha\\sim \\lambda$ is essential to recover the low frequency Amp\\`ere's law, derived by neglecting the displacement current. 
This equation being common to Magneto-Hydro-Dynamic models, the quasi-neutral limit encompasses these two assumptions. With the vanishing of this generalised dimensionless Debye length, $(\\lambda,\\alpha)\\to 0$, the Maxwell system degenerates into\n\\begin{subequations}\\label{eq:Maxwell:Deg}\n\\begin{align}\n& \\beta \\nabla_x \\times B = \\frac{M}{\\xi} J\\,,\\label{eq:Maxwell:Deg:Ampere}\\\\\n& \\beta \\partial_t B + \\nabla_x \\times E = 0\\,,\\label{eq:Maxwell:Deg:Faraday}\\\\\n& n_i = n_e\\,,\\\\\n& \\nabla_x \\cdot B = 0\\,.\\label{eq:Maxwell:Deg:divB}\n\\end{align}\n\\end{subequations}\nFrom Gauss's law, the electronic density is recovered to match that of the ions, which genuinely enforces the quasi-neutrality of the plasma. The electric field contributes to neither the degenerate Gauss equation nor the Amp\\`ere equation. The remaining occurrence of the electric field is limited to the Faraday equation \\eqref{eq:Maxwell:Deg:Faraday}. Therefore, \nthis set of equations is not well suited for the computation of the electric field. Indeed, the electrostatic component of the electric field can be arbitrarily chosen in Eqs.~\\eqref{eq:Maxwell:Deg}. \n\nIn the quasi-neutral limit, the electric field is provided by the particle current $J$ rather than by the displacement current ($\\partial E\/\\partial t$, originally present in Amp\\`ere's law). To close the system, the dependence of $J$ on $E$ must be made explicit to restore the uniqueness of the electric field. This depends on the model describing the plasma. \n\n\n\\subsection{A hierarchy of kinetic models for quasi-neutral plasmas}\n\n\\subsubsection{A kinetic formulation of the Electron-MHD}\\label{sec:kinetic:EMHD}\nThe aim here is to follow the microscopic dynamics of the electrons. The velocity of interest is the kinetic velocity of the electrons. 
This amounts to setting $\\vartheta_0 = v_0\/\\varepsilon$, or equivalently $\\xi \\varepsilon = 1$, yielding\n\\begin{subequations}\\label{VMadimEMHD}\n\\begin{align}\n \\begin{split}\n&\\partial_t f_i + \\varepsilon \\left(v \\cdot \\nabla_x f_i + \\eta (E+ {\\varepsilon\\beta} v \\times B)\\cdot \\nabla_v f_i\\right) =\\frac{1}{\\kappa}\\nu_{ii} \\left(\\mathcal{M}_{n_i,u_i,T_i} - f_i\\right) \\,,\\label{VMadimEMHD:a}\\\\ \n \\end{split} \\\\ \n \\begin{split}\n& \\partial_t f_e + v \\cdot \\nabla_x f_e - \\eta(E+ {\\beta} v \\times B)\\cdot \\nabla_v f_e =\\frac{1}{\\varepsilon \\kappa} \\nu_{ee} \\left(\\mathcal{M}_{n_e,u_e,T_e} - f_e\\right) \\,.\\label{VMadimEMHD:b}\\\\ \n \\end{split}\n \\end{align}\n \\end{subequations}\nThe collisions are assumed to be ineffective on the characteristic time scale: $\\kappa\\varepsilon\\gg 1$, which amounts to neglecting the collision operators in Eqs.~\\eqref{VMadimEMHD}, in particular for the ions, owing to $\\varepsilon \\ll 1$. \n\nAt these scales, the ions may be considered at rest.\nPerforming the quasi-neutral limit $(\\lambda=\\alpha)\\to 0$, the system at hand is recast into (see \\cite{DegDelDoy})\n\\begin{align}\n&\\partial_t f_e + v \\cdot \\nabla_x f_e -\\eta (E+\\beta v \\times B)\\cdot \\nabla_v f_e =0\\,,\\label{QNVM1}\\\\\n& \\beta \\nabla_x \\times B=(M\\varepsilon)J\\,,\\label{QNVM2}\\\\\n& \\beta\\partial_t B + \\nabla_x \\times E = 0\\,,\\label{QNVM3}\\\\\n&n_e= n_i=n\\,,\\label{QNVM4}\\\\\n& \\nabla_x \\cdot B = 0\\,.\\label{QNVM5}\n\\end{align}\nFirst, note that the curl of the Faraday equation \\eqref{QNVM3} together with the time derivative of Amp\\`ere's law \\eqref{QNVM2} yields\n \\begin{equation}\\label{eq:ampere:farraday:0}\n \\nabla_x \\times \\nabla_x \\times E = - (M \\varepsilon) \\partial_t J \\,,\n \\end{equation}\nwhich outlines that the electric field is known only up to the gradient of a potential in this system. 
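Spelling out this step (a two-line derivation, consistent with Eqs.~\\eqref{QNVM2} and \\eqref{QNVM3}):

```latex
% Curl of Faraday's law and time derivative of the degenerate Ampere's law:
\begin{align*}
  \beta\,\partial_t(\nabla_x \times B) + \nabla_x \times \nabla_x \times E &= 0\,,
  && \text{(curl of \eqref{QNVM3})}\\
  \beta\,\partial_t(\nabla_x \times B) &= (M\varepsilon)\,\partial_t J\,,
  && \text{(time derivative of \eqref{QNVM2})}
\end{align*}
% Subtracting the two lines yields \eqref{eq:ampere:farraday:0}.
```

Since $\\nabla_x \\times \\nabla_x \\times (\\nabla_x \\psi) = 0$ for any potential $\\psi$, the identity is insensitive to the addition of a gradient to $E$, which is why the electric field is determined only up to a gradient.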
\nIn \\cite{DDSprep,DegDelDoy} the ill-posed nature of this equation is corrected by making explicit the relation between the current density and the electric field. The first moment of Eq.~\\eqref{QNVM1} yields the conservation of the electronic momentum, with\n\\begin{subequations}\n\\begin{equation}\\label{momentJ}\n (M \\varepsilon) \\frac{\\partial}{\\partial t} (n u_e) = - \\nabla_x \\cdot \\mathbb{S} - \\eta \\big( n E + \\beta (M \\varepsilon) n u_e \\times B \\big)\\,, \n\\end{equation}\nwith $n=n_e=n_i$ and\n\\begin{equation}\n \\mathbb{S}= \\int \\left(v \\otimes v \\right)\\, f_e \\, dv \\,.\n\\end{equation}\n\\end{subequations}\nInserting this identity into Eq.~\\eqref{eq:ampere:farraday:0} gives \n\\begin{equation}\\label{eq:ampere:QN}\n \\eta n E + \\nabla_x \\times \\nabla_x \\times E = \\eta \\beta (M\\varepsilon)J\\times B - \\nabla_x \\cdot \\mathbb{S} \\,.\n\\end{equation}\nThis equation is well posed in the quasi-neutral limit ($n>0$) and can be used for the computation of the electric field. This yields the following definition of the quasi-neutral model\n\\begin{subequations}\\label{eq:def:sys:EMHD}\n\\begin{align}\n&\\partial_t f_e + v \\cdot \\nabla_x f_e -(E+ v \\times B)\\cdot \\nabla_v f_e =0\\,,\\label{QN1bis}\\\\\n& n E + \\nabla_x \\times \\nabla_x \\times E = (M\\varepsilon)J\\times B - \\nabla_x \\cdot \\mathbb{S}\\,,\\label{QN2bis}\\\\\n&\\partial_t B + \\nabla_x \\times E = 0\\,,\\label{QN3bis}\\\\\n& \\nabla_x \\cdot B = 0\\,.\\label{QN5bis}\n\\end{align}\n\\end{subequations}\nThis model is written under the assumptions $\\eta=1$, meaning that the plasma thermal energy is on a par with the electric energy, and $\\beta=1$.\n\n\nNote that the electric field provided by Eq.~\\eqref{QN2bis} enforces a divergence free particle current, or more precisely $\\partial_t (\\nabla \\cdot J) = 0$. 
This yields, thanks to the continuity equation:\n\\begin{equation*}\n\\frac{\\partial^2 \\rho}{\\partial t^2} = 0 \\,.\n\\end{equation*} \nThis proves the consistency of this model with the quasi-neutrality assumption (matching of the electronic and ionic densities) as soon as the initial data are compliant with this regime. The evolution of the ions may be taken into account thanks to another Vlasov equation. This brings corrections to the derivation of Eq.~\\eqref{QN2bis}, accounting for the contribution of the ionic particle current.\n\n\nThe characteristics of this model are similar to those of the so-called Electron-MHD: the time scale of interest is that of the electrons, the ions merely creating a motionless background for the fast electron flows \\cite{kingsep_electron_1990}. In particular, this model accounts for the inertia of the electrons. A noticeable difference with the Electron-MHD (see Sec.~\\ref{Sec:EMHD}) lies in the kinetic description of the plasma. An Asymptotic-Preserving method is proposed in \\cite{DegDelDoy} to bridge this quasi-neutral model and the Vlasov-Maxwell system. The properties of this quasi-neutral plasma description are investigated in \\cite{tronci_neutral_2015} by means of a linear stability analysis.\n\n\n \n\\subsubsection{A hybrid formulation of the Hall-MHD}\\label{sec:hybrid}\nHybrid modelling \\cite{buchner_hybrid_2003,yin_hybrid_2002,tronci_hybrid_2014} refers to a class of plasma models where the ions are described by a kinetic equation while the fluid limit is assumed for the electrons. This is in line with the scaling relations of the collision frequencies stated by Eqs.~\\eqref{eq:scaling:relation:freq}. The relaxation of the electronic distribution function towards the local equilibrium is indeed faster than for the ions. The aim of these models is to filter out of the equations the fast scales carried by the electron dynamics. 
Therefore, a zero inertia regime is also assumed for the electrons, together with the fluid limit and the quasi-neutrality of the plasma.\n\nThe typical velocity selected here is the microscopic (thermal) velocity of the ions. This translates into the identity $\\xi = 1$, resulting in the following system for the plasma:\n\\begin{subequations}\\label{VMadim:Hybrid}\n\\begin{align}\n \\begin{split}\n&\\partial_t f_i + v \\cdot \\nabla_x f_i + \\eta (E+ \\frac{\\beta}{\\xi} v \\times B)\\cdot \\nabla_v f_i =\\frac{1}{\\kappa}\\nu_{ii} \\left(\\mathcal{M}_{n_i,u_i,T_i} - f_i\\right) \\,,\\label{VMadim:Hybrid:a}\\\\ \n \\end{split} \\\\ \n \\begin{split}\n& \\partial_t f_e + \\frac{1}{\\varepsilon}\\left( v \\cdot \\nabla_x f_e - \\eta(E+ \\frac{\\beta}{\\varepsilon \\xi} v \\times B)\\cdot \\nabla_v f_e \\right)=\\frac{1}{\\varepsilon \\kappa} \\nu_{ee} \\left(\\mathcal{M}_{n_e,u_e,T_e} - f_e\\right) \\,.\\label{VMadim:Hybrid:b}\\\\ \n \\end{split}\n \\end{align}\n \\end{subequations} \nThe fluid limit for the electrons is selected assuming $(\\varepsilon \\kappa) \\ll 1$, meaning that the number of electron collisions during the typical time is large. The quasi-neutrality of the plasma amounts to setting $\\lambda=\\alpha\\ll1$.\nTo overcome the degeneracy of the Maxwell system in the quasi-neutral limit, the electronic momentum is harnessed to provide the so-called generalised Ohm's law. 
The electronic system can be recast into\n\n\\begin{align*}\n\\begin{split}\n &\\left(M\\varepsilon\\right)^2\\left(\\frac{1}{M} \\partial_t (n_e u_e) + \\nabla_x \\cdot (n_eu_e \\otimes u_e) \\right)+ \\nabla_x p_e + \\eta\\, n_e \\Big( E + (\\beta M) u_e\\times B\\Big) = \\\\\n & \\hspace*{9.5cm }- (\\kappa (M\\varepsilon)) \\nabla_x \\cdot \\sigma_e \\,, \n \\end{split}\\\\\n \\begin{split}\n& \\frac{1}{M} \\partial_t W_e + \\nabla_x \\cdot \\Big((W_e + p_e)u_e \\Big) + \\eta n_e E\\cdot u_e = \n-\\frac{\\kappa}{M\\varepsilon}\\nabla_x \\cdot \\Big( \\left(M\\varepsilon \\right)^2\\sigma_e \\cdot u_e + \\mu_e \\nabla_x T_e \\Big)\\,,\n\\end{split} \n\\end{align*}\nwith, owing to the quasi-neutrality assumption, $n_e = n_i = n$.\n\nThe dynamics described by these equations is stiff; this is due to the smallness of $(\\xi\\varepsilon)$ in this regime: the thermal velocity of the ions (defined as the typical velocity) is small compared to that of the electrons. Therefore, the electrons are in a low Mach regime. Assuming $(M\\varepsilon)\\ll 1$ gives rise to the following equilibria\n\\begin{subequations}\\label{sys:forces:equilibria}\n\\begin{align}\n &\\nabla_x (n T_e) + \\eta\\,n \\Big( E + (\\beta M) u_e\\times B\\Big) = 0 \\,, \\\\\n & \\nabla_x T_e = 0 \\,.\n \\end{align}\n\\end{subequations}\n\n\nThe classical massless approximation for the electrons is recovered with the generalised Ohm's law and a homogeneous electronic temperature. 
The definition of the mean velocity $u_e$ is derived from the particle current density $ J= n (u_i - u_e)$ together with Amp\\`ere's law \\eqref{eq:Maxwell:Deg:Ampere}, yielding \n\\begin{equation}\\label{eq:subs:ue:ui:j}\n u_e = u_i - \\frac{\\beta}{M}\\frac{\\nabla_x \\times B}{n} \\,.\n\\end{equation}\n\nThe hybrid plasma model writes (assuming $\\eta=\\beta=1$)\n\\begin{subequations}\n\\begin{align}\n&\\partial_t f_i + v \\cdot \\nabla_x f_i + (E+ v \\times B)\\cdot \\nabla_v f_i =0 \\,,\\\\ \n& E = - M u \\times B+ \\frac{\\nabla_x \\times B}{n}\\times B - T_e \\frac{\\nabla_x n}{n}\\,,\\\\\n&\\frac{\\partial B}{\\partial t} + \\nabla_x \\times E = 0 \\,, \\qquad \\nabla_x \\cdot B = 0 \\,,\n\\end{align}\nwith\n\\begin{align}\n n = \\int f_i \\, dv \\,, \\quad n u = \\int v f_i \\, dv \\,.\n\\end{align}\n\\end{subequations}\nThe derivation of a similar model is proposed in \\cite{acheritogaray_kinetic_2011}, with numerical investigations in \\cite{degond_simulation_2011}.\n\n\\subsection{A fluid hierarchy of quasi-neutral models}\n\n\\subsubsection{The Electron-MHD system}\\label{Sec:EMHD}\nThis model is obtained by letting $\\kappa \\to 0$ in Eqs.~\\eqref{eq:sys:Boltzmann}. This yields the following set of equations for the electrons\n\\begin{subequations}\\label{eq:Sys:Bifluide:electrons}\n\\begin{align}\n &\\frac{\\xi}{M} \\partial_t n_e + \\nabla_x \\cdot (n_e u_e) = 0 \\label{eq:bifluid:ne}\\\\\n \\begin{split}\n &\\left(\\frac{\\xi}{M} \\partial_t (n_e u_e) + \\nabla_x \\cdot (n_eu_e \\otimes u_e) \\right)+ \\frac{1}{(M\\varepsilon)^2} \\left( \\nabla_x p_e + \\eta\\, n_e ( E + \\frac{\\beta M}{\\xi}u_e \\times B ) \\right) =0\\,, \n \\end{split} \\\\\n& \\frac{\\xi}{M} \\partial_t W_e + \\nabla_x \\cdot \\big((W_e + p_e)u_e \\big) + \\eta n_e E\\cdot u_e = 0\\,.\n\\end{align}\n\\end{subequations}\nA similar system is derived for the ions, however with $\\varepsilon=1$ and $\\eta$ replaced by $-\\eta$. 
These two sets of conservation laws are coupled to the Maxwell system \\eqref{eq:Maxwell:adim}. Performing the quasi-neutral limit $(\\lambda=\\alpha)\\to 0$ in this system and focusing on the electronic dynamics with $\\xi\\varepsilon=1$ yields the quasi-neutral bi-fluid Euler-Maxwell system, also referred to as the Electron-MHD system. \nThis model is similar to that of Sec.~\\ref{sec:kinetic:EMHD} but with a fluid description for the plasma. It is implemented and investigated numerically in the framework of Asymptotic-Preserving methods in \\cite{DDSprep}.\n\n\n\\subsubsection{The Hall-MHD regime}\\label{sec:Hall:MHD}\nThe Hall-MHD regime (see \\cite{lighthill_studies_1960,witalis_hall_1986,schnack_lectures_2009}) is recovered from the assumptions of the preceding section but with a typical velocity equal to the plasma mean flow, yielding $\\xi=M$. The fast electronic dynamics is filtered out from the equations to provide a low frequency model of the plasma, driven by the evolution of the massive ions. The plasma velocity, denoted $u$, is defined as that of the heavy species: $u=u_i$.\nThe other parameters obey the classical scaling relations of MHD models: $\\eta=1$ and $\\beta=1$. 
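An order-of-magnitude sketch of the scale separation exploited in this regime (hydrogen mass ratio; the ionic Mach number is an illustrative value, not taken from the text):

```python
# Illustrative scale separation in the Hall-MHD regime (hydrogen mass ratio).
# The ionic Mach number M below is an assumed, illustrative value.
import math

eps = math.sqrt(9.109e-31 / 1.673e-27)  # electron/ion mass-ratio parameter
M = 0.1                                  # illustrative ionic Mach number

M_e = M * eps  # electronic Mach number entering the drift limit (M*eps -> 0)

print(f"M     = {M}")
print(f"M*eps = {M_e:.2e}")  # ~ 2.3e-3: electrons deep in the low Mach regime
print(f"ratio = {M / M_e:.1f} (= 1/eps, ~ 42.9 for hydrogen)")
```

Even for a moderate ionic Mach number, the electronic Mach number $M\\varepsilon$ is smaller by the fixed factor $\\varepsilon^{-1}\\approx 43$, which motivates treating the electrons in the drift (low Mach) limit.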
\n\\begin{comment}\nDenoting $n=n_i=n_e$, the Electron-MHD model writes\n\\begin{subequations}\\label{eq:Sys:EMHD}\n\\begin{align}\n & \\partial_t n + \\nabla_x \\cdot (n u_i) = 0 \\label{eq:bifluid:ni}\\\\\n \\begin{split}\n &\\partial_t (n u_i) + \\nabla_x \\cdot (n u_i \\otimes u_i) + \\frac{1}{M^2} \\Big( \\nabla_x p_i - n ( E + u_i \\times B ) \\Big) =0\\,, \n \\end{split} \\\\\n& \\partial_t W_i + \\nabla_x \\cdot \\big((W_i + p_i)u_i \\big) + n E\\cdot u_i = 0\\,.\n\\end{align}\n\n\\begin{align} \n& \\partial_t W_e + \\nabla_x \\cdot \\big((W_e + p_e)u_e \\big) + n E\\cdot u_e = 0\\,,\n\\end{align}\nThe equation driving the evolution of the particle current is derived from the conservation of the electronic and ionic momentum, yielding\n\\begin{equation}\n \\partial_t J + \\nabla_x \\cdot S + \\frac{1}{M^2} \\left( \\nabla_x \\Big(p_i - \\frac{p_e}{\\varepsilon^2} \\Big) - n E\\Big(1+\\frac{1}{\\varepsilon^2}\\Big) - n \\Big(u_i + \\frac{1}{\\varepsilon^2}u_e\\Big) \\times B ) \\right) =0\\,. \n \\end{equation} \nwith\n\\begin{equation}\nJ = n (u_i - u_e) \\,, \\qquad S = n \\left( u_e \\otimes u_e - u_i \\otimes u_i \\right)\\,.\n\\end{equation}\nThe electromagnetic field obeys \n\\begin{align}\n\\begin{split}\n& -\\frac{1}{M^2} n E\\Big(1+\\frac{1}{\\varepsilon^2}\\Big) + \\nabla_x \\times \\nabla_x \\times E = \\\\\n &\\hspace*{3cm}- \\nabla_x \\cdot S - \\frac{1}{M^2} \\left( \\nabla_x \\Big(p_i - \\frac{p_e}{\\varepsilon^2} \\Big) - n \\Big(u_i + \\frac{1}{\\varepsilon^2}u_e\\Big) \\times B ) \\right)\\,,\n\\end{split}\\\\\n& \\partial_t B + \\nabla_x \\times E = 0\\,, \\qquad \\nabla_x \\cdot B = 0\\,.\n\\end{align}\n\\end{subequations}\n\nRemark that in this system the electron are in a low Mach regime when the plasma mean flow is small compared to the electronic speed of sound $(M\\varepsilon \\ll 1)$. 
Though the small scales related to the charge separation (Debye length and plasma period) are not described by this quasi-neutral model, the fast electronic dynamic is not completely filtered out from these equations.\n\\end{comment}\n\n\n\n\nIn the drift regime ($M\\varepsilon\\to0$), the electronic energy reduces to the internal energy \n\\begin{subequations}\n \\begin{equation}\n \\mathcal{E}_e= p_e\/(\\gamma-1)\\,,\n \\end{equation}\n with the electronic momentum and energy verifying\n \\begin{align}\n &\\nabla_x p_e = - n \\left(E + u_e \\times B \\right)\\,,\\label{eq:qe:QN}\\\\\n &\\partial_t \\mathcal{E}_e + \\nabla_x \\cdot \\big( (\\mathcal{E}_e+p_e) u_e \\big) = - n E\\cdot u_e \\,.\\label{eq:We:QN}\n \\end{align}\n \\end{subequations}\nThe generalised Ohm's law \\eqref{eq:qe:QN} is harnessed to compute the electric field. The electronic velocity $u_e$ is substituted by $u_e=u - J\/n$.\n\n\nThe electric field is computed thanks to the generalised Ohm's law, giving rise to\n\\begin{equation}\\label{eq:Ohm:law}\n E = - u \\times B+ \\frac{\\nabla_x \\times B}{n}\\times B - \\frac{\\nabla_x p_e}{n}\\,.\n\\end{equation}\nThe first term of the right hand side of Eq.~\\eqref{eq:Ohm:law} is the classical frozen field term, describing the convection of the magnetic field with the plasma flow. 
The second and third contributions are the so-called Hall and diamagnetic terms.\nInserting this definition into the Faraday law \\eqref{eq:Maxwell:Deg:Faraday}, the magnetic induction equation can be constructed, with\n\\begin{equation}\\label{eq:magnetic:induction:Hall}\n \\partial_t B + \\nabla_x \\cdot (u \\otimes B - B \\otimes u) = - \\nabla_x \\times \\left(\\frac{\\nabla_x \\times B}{n}\\times B \\right) + \\nabla_x \\times\\left( \\frac{\\nabla_xp_e}{n}\\right)\\,.\n\\end{equation}\n\nFinally, the plasma density, momentum, total pressure $p$ and total energy $W$, defined by \n\\begin{subequations}\n\\begin{align}\n p &= p_i +p_e = n (T_i+T_e) \\,,\\\\\n W &= W_i + \\mathcal{E}_e = (M)^2 \\frac{1}{2} n u^2 + \\frac{p}{\\gamma-1} \\,,\n\\end{align}\nverify\n\\begin{align}\n &\\partial_t n + \\nabla_x \\cdot (n u) = 0 \\\\\n & \\partial_t (n u) + \\nabla_x \\cdot (nu \\otimes u) + \\frac{1}{M^2} \\nabla_x p = \\frac{1}{M^2} J \\times B \\\\\n & \\partial_t W + \\nabla_x \\cdot \\big( (W + p) u \\big) + \\nabla_x \\cdot \\Big( (\\mathcal{E}_e+p_e) \\text{v}_\\text{H} \\Big) = E\\cdot J \\,.\n\\end{align}\n\\end{subequations}\nThe Hall velocity $\\text{v}_\\text{H}$, which can be interpreted as the electron velocity in the ion frame, is defined by\n\\begin{equation}\n \\text{v}_\\text{H} = - \\frac{J}{n}\\,.\n\\end{equation}\nThe ideal MHD equations are classically written in conservative form using the system total pressure and energy\n\\begin{subequations}\n \\begin{equation}\n p_\\text{TOT} = p + \\frac{B^2}{2} \\,,\\qquad W_\\text{TOT} = W + \\frac{B^2}{2} \\,,\n \\end{equation}\nwriting the system as \n\\begin{align}\n &\\partial_t n + \\nabla_x \\cdot (n u) = 0 \\\\\n & \\partial_t (n u) + \\nabla_x \\cdot \\Big( nu \\otimes u - \\frac{1}{M^2} B \\otimes B \\Big) + \\frac{1}{M^2} \\nabla_x p_\\text{TOT} = 0 \\\\\n & \\partial_t W_\\text{TOT} + \\nabla_x \\cdot \\Big( (W_\\text{TOT} + p_\\text{TOT}) u - (B\\cdot u) B \\Big) = - \\nabla_x \\cdot \\Big( 
(\\mathcal{E}_e+p_e) \\text{v}_\\text{H} \\Big)\\,,\\\\\n & \\partial_t B - \\nabla_x \\cdot \\Big( B\\otimes (u+\\text{v}_\\text{H}) - (u+\\text{v}_\\text{H}) \\otimes B \\Big) = \\nabla_x \\times \\left( \\frac{\\nabla_xp_e}{n}\\right)\\,.\n\\end{align}\nThis set of equations is supplemented with the electronic energy conservation \\eqref{eq:We:QN}. \n\\end{subequations}\n\nThe ideal MHD equations are recovered from this system assuming an ideal Ohm's law, where the Hall velocity is assumed small compared to the ion mean velocity and therefore neglected. However, in this simplified framework (omitting the unlike-particle collisions), there is no mechanism preventing the electron mean velocity from departing from that of the ions. Consequently, the generalised Ohm's law incorporates the Hall velocity in complement to the so-called ideal Ohm's law. The effect of the resistivity should be considered to derive the ideal MHD regime.\n\nThe drift approximation operated for the electrons amounts to letting the electronic Mach number $(M\\varepsilon)$ vanish. The scale separation introduced by the small electron to ion mass ratio $\\varepsilon$ is not always sufficient to consider this limit independently of the limit of vanishing ionic Mach number $M \\to 0$. For low ionic Mach numbers, a low frequency filtering may be operated by performing the limit of vanishing electronic and ionic Mach numbers jointly. This asymptotics defines the Massless MHD regime \\cite{besse_model_2004}.\n\n\\begin{comment}\\subsection{Comments and remarks}\\label{sec:MHD:ramrks}\nThe electronic gradient pressure is usually considered in the so-called extended MHD (X-MHD) models but neglected in the ideal system. The particle current density is actually related to the inter-species relative velocities, or equivalently, to the friction term. 
The scaling of the current density should thus be related to the inter-species collisions, which are beyond the scope of the present hierarchy, namely the resistive MHD regimes.\n\n\nAnother route can be explored with an electron temperature significantly different from that of the ions.\n This property would make it possible to increase the scale separation between the electronic and ionic Mach numbers and thus enlarge the range of validity of the MHD asymptotics for moderate mass ratios. \n\n\\end{comment}\n\n\n\\section{The electrostatic regime and the Boltzmann relation}\\label{sec:electrostatic}\n\n\\subsection{Electrostatic limit of the Maxwell system}\n\nThe electrostatic regime is recovered from the dimensionless Maxwell system \\eqref{eq:Maxwell:adim} by letting $\\alpha$ go to zero. This assumption may be interpreted as the typical velocity being negligible compared to the speed of light. From Amp\\`ere's law\n\\begin{equation}\\label{eq:Ampere:electro}\n \\lambda^2 \\eta \\frac{\\partial {E}}{\\partial t} + \\frac{M}{\\xi} J =\\frac{\\beta \\lambda^2 \\eta}{\\alpha^2} \\nabla_x \\times B \\,,\n\\end{equation}\nthe limit $\\alpha\\to 0$ provides $\\nabla \\times B=0$. Nonetheless, the right-hand side of Eq.~\\eqref{eq:Ampere:electro} remains an indeterminate form. Therefore Amp\\`ere's law is not well suited for the computation of the electric field in the electrostatic limit.\nHowever, subject to suitable boundary conditions, the property $\\nabla_x \\times B = 0$ together with $\\nabla_x \\cdot B = 0$ yields $\\partial_t B = 0$. Inserting this property into the Faraday equation \\eqref{VMadim:c} provides an electrostatic electric field: $\\nabla \\times E = 0$. Furthermore, the indeterminate form in Eq.~\\eqref{eq:Ampere:electro} is divergence free. 
Therefore, computing the divergence of Amp\\`ere's law provides\n\\begin{equation*}\n \\lambda^2 \\eta \\frac{\\partial \\nabla_x \\cdot {E}}{\\partial t} = - \\frac{M}{\\xi} \\nabla_x \\cdot J \\,,\n\\end{equation*}\nwhich is a well-posed problem for the electric field under the condition $\\nabla_x \\times E = 0$. Note that, owing to the continuity equation\n\\begin{equation*}\n \\frac{\\partial \\rho}{\\partial t} + \\frac{M}{\\xi} \\nabla_x \\cdot J = 0\\,,\n\\end{equation*}\noriginating from the conservation of the particle densities \\eqref{eq:cons:ni} and \\eqref{eq:cons:ne}, the divergence of \nAmp\\`ere's law is equivalent to the time derivative of the Gauss law, with\n \\begin{equation*}\n \\frac{\\partial}{\\partial t} \\left(\\lambda^2 \\eta \\nabla_x \\cdot E -{\\rho}\\right)=0\\,.\n \\end{equation*}\nTherefore, in the electrostatic regime, the Gauss equation is used to compute the electric field.\n\n\\subsection{Quasi-neutral models at the electronic scale}\nThis analysis is carried out under the assumption of a vanishing magnetic field ($B=0$). The plasma description considered in the sequel is therefore\n\\begin{subequations}\n\\begin{align}\n &\\xi \\partial_t f_i + v \\cdot \\nabla_x f_i + \\eta E \\cdot \\nabla_v f_i = \\frac{ \\xi}{\\kappa} \\Big( \\nu_{ii} \\left(\\mathcal{M}_{n_i,u_i,T_i} - f_i\\right) \\Big) \\,,\\\\ \n & \\xi \\varepsilon \\partial_t f_e + v \\cdot \\nabla_x f_e - \\eta E\\cdot \\nabla_v f_e = \\frac{ \\xi}{\\kappa} \\Big( \\nu_{ee} \\left(\\mathcal{M}_{n_e,u_e,T_e} - f_e\\right) \\Big)\\,\\label{eq:VB:electrostatic}\n\\end{align}\n coupled to the Gauss equation \n\\begin{equation}\n- \\lambda^2 \\eta \\Delta \\phi= n_i - n_e\\,,\n\\end{equation}\n$\\phi$ being the electrostatic potential, with $E = - \\nabla \\phi$ and $n_\\alpha = \\int f_\\alpha dv$. \n\\end{subequations}\nThe quasi-neutrality of the plasma is recovered for vanishing scaled Debye length $\\lambda \\to 0$. 
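The singular character of this limit can be illustrated with a minimal numerical sketch (not part of the model above; the 1D grid, the homogeneous Dirichlet boundary conditions and the charge-mismatch profile $n_i - n_e$ are illustrative assumptions): for a fixed non-neutral charge density, the solution of the scaled Poisson equation $-\lambda^2 \eta \Delta \phi = n_i - n_e$ grows like $1/\lambda^2$ as the scaled Debye length shrinks.

```python
import numpy as np

def solve_scaled_poisson(rho, lam, eta=1.0):
    """Solve -lam^2 * eta * phi'' = rho on (0,1), phi(0)=phi(1)=0,
    by second-order finite differences on the interior nodes."""
    n = rho.size
    h = 1.0 / (n + 1)
    lap = (2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)) / h**2
    return np.linalg.solve(lam**2 * eta * lap, rho)

x = np.linspace(0.0, 1.0, 101)[1:-1]      # 99 interior nodes
rho = np.sin(np.pi * x)                   # illustrative mismatch n_i - n_e
phi_1 = solve_scaled_poisson(rho, lam=1.0)
phi_2 = solve_scaled_poisson(rho, lam=1e-2)
# By linearity the potential scales as 1/lam^2 for a fixed non-neutral rho:
print(np.max(np.abs(phi_2)) / np.max(np.abs(phi_1)))  # ~1e4
```

The blow-up disappears only when $n_i - n_e \to 0$ together with $\lambda$, which is precisely the quasi-neutral regime discussed in the text.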
In this regime, similarly to the electromagnetic case, an equation needs to be manufactured from the motion of the particles in order to compute the electric field. This is classically obtained thanks to the equation of the electronic momentum conservation. The electric field is computed in order for this conservation to be satisfied. \nIn \\cite{DDNSV10} an equivalent approach is proposed. It consists in using the second time derivative of the Gauss law to produce\n\\begin{equation}\\label{eq:Gauss:double:derivative}\n- \\lambda^2 \\eta \\frac{\\partial^2}{\\partial t^2} \\Delta \\phi = \\frac{\\partial^2 }{\\partial t^2} \\left( n_i -n_e\\right) \\,.\n\\end{equation}\nIn the quasi-neutral limit ($\\lambda\\to 0$), the electrostatic field is the Lagrange multiplier of the quasi-neutrality constraint\n\\begin{equation*}\n\\frac{\\partial^2 }{\\partial t^2} \\left( n_i -n_e\\right) = 0\\,.\n\\end{equation*}\nFrom the system~\\eqref{eq:Sys:MM:electrons}, the following identity is recovered\n\\begin{align}\n&\\left(\\frac{\\xi}{M}\\right)^2\\frac{\\partial^2 n_e}{\\partial t^2} = -\\nabla_x \\cdot \\Big(\\nabla_x \\cdot (n_e u_e \\otimes u_e) \\Big)- \\frac{1}{(M\\varepsilon)^2} \\nabla_x \\cdot \\Big(\\nabla_x P_e + \\eta n_e E\\Big) \\,.\\label{eq:dt2:ne} \\\\\n&P_e = p_e \\mathbb{I}\\textnormal{d} + \\kappa \\left< v \\otimes v g_e \\right> \\,, \\qquad \\left< v \\otimes v \\varphi \\right> = \\int v \\otimes v \\, \\varphi \\, dv \\,. 
\n\\end{align}\nRecalling the scaling relations of Sec.~\\ref{sec:kinetic:EMHD}: $\\xi=1\/\\varepsilon$ and $\\kappa \\varepsilon > 1$, and assuming that the ions are at rest, the evolution of the charge density $n_i -n_e$ is governed by Eq.~\\eqref{eq:dt2:ne} with\n\\begin{equation}\\label{eq:dt2:ne:electrostatic}\n\\frac{\\partial^2 n_e}{\\partial t^2} = -(M\\varepsilon)^2 \\nabla_x \\cdot \\Big(\\nabla_x \\cdot (n_e u_e \\otimes u_e) \\Big)- \\nabla_x \\cdot \\Big(\\nabla_x P_e + \\eta n_e E\\Big) \\,.\n\\end{equation}\nThe evolution of the density is nearly independent of the mean flow velocity, relying instead on the balance between the pressure and the electric forces. The equation providing the electric potential $\\phi$ is obtained by inserting this relation into Eq.~\\eqref{eq:Gauss:double:derivative} and passing to the limit $\\lambda \\to 0$.\n\nThis yields the following quasi-neutral kinetic plasma description\n\\begin{subequations}\n\\begin{align}\n &\\partial_t f_e + v \\cdot \\nabla_x f_e + \\eta \\nabla_x \\phi \\cdot \\nabla_v f_e = \\frac{1}{\\varepsilon\\kappa} \\Big( \\nu_{ee} \\left(\\mathcal{M}_{n_e,u_e,T_e} - f_e\\right) \\Big)\\,,\\label{eq:electrostatic:kinetic}\\\\\n&- \\eta \\nabla_x \\cdot \\left( n_e \\nabla_x \\phi \\right) = - \\nabla_x \\cdot \\Big( \\nabla_x \\cdot P_e - (M\\varepsilon)^2 \\nabla_x \\cdot (n_e u_e \\otimes u_e) \\Big) \\,.\n\\end{align}\nAccording to the value of $\\kappa$, the collision term in Eq.~\\eqref{eq:electrostatic:kinetic} may be disregarded, thereby defining a collisionless kinetic description. 
Contrariwise, letting $\\kappa \\to 0$, a fluid description for the electrons may be derived, with \n\\end{subequations}\n\n\\begin{subequations}\\label{eq:sys:electrostatic:electron:fluid}\n\\begin{align}\n &\\frac{1}{(M\\varepsilon)} \\partial_t n_e + \\nabla_x \\cdot (n_e u_e) = 0 \\label{eq:sys:electrostatic:electron:density}\\\\\n \\begin{split}\n &\\frac{1}{(M\\varepsilon)} \\partial_t (n_e u_e) + \\nabla_x \\cdot (n_eu_e \\otimes u_e) + \\frac{1}{(M\\varepsilon)^2} \\left( \\nabla_x p_e -\\, \\eta n_e \\nabla_x \\phi \\right) =0\\,, \\label{eq:sys:electrostatic:electron:momentum} \n \\end{split} \\\\\n& \\frac{1}{(M\\varepsilon)} \\partial_t W_e + \\nabla_x \\cdot \\big((W_e + p_e)u_e \\big) - \\eta n_e \\nabla_x\\phi\\cdot u_e = 0\\,.\n\\end{align}\n\\end{subequations}\nNote that an equation similar to \\eqref{eq:dt2:ne:electrostatic}, but with $P_e = p_e \\mathbb{I}\\textnormal{d}$, may be derived from the conservation of the electronic density \\eqref{eq:sys:electrostatic:electron:density} and momentum \\eqref{eq:sys:electrostatic:electron:momentum}. This shows that the electronic dynamics, in particular the electronic speed of sound, is resolved in this model. Comparable models are implemented, together with numerical experiments, in the context of Asymptotic-Preserving methods in \\cite{CDV07} for fluid plasma descriptions and in \\cite{manfredi_vlasov_2011} for kinetic equations.\n\\subsection{Quasi-neutral models at the ionic scale}\nThe typical velocity is chosen to be that of the ions, with either $\\xi=1$ for the kinetic descriptions of the ions or $\\xi=M$ for macroscopic models.\n\nThe hybrid modelling investigated in Sec.~\\ref{sec:hybrid} is defined by the scaling relations $\\xi=1$, $(\\varepsilon \\kappa)\\ll 1$ and $\\eta =1$. The equilibria stated by Eqs.~\\eqref{sys:forces:equilibria} yield:\n\\begin{equation}\n T_e \\nabla_x n_e = n_e \\nabla_x \\phi \\,,\n\\end{equation}\nwith a homogeneous electronic temperature. 
This equation is integrated to provide the so-called Boltzmann relation\n\\begin{equation}\\label{eq:Boltzmann:relation}\n n_e = n_0 \\exp\\left( - \\frac{\\phi}{T_e}\\right) \\,,\n\\end{equation}\n$n_0$ being a constant (independent of the space variable $x$) that should be determined from adequate conditions \\cite{hagelaar_how_2007}. Due to the Boltzmann relation, the quasi-neutral limit is no longer singular. Indeed, plugging the Boltzmann relation \\eqref{eq:Boltzmann:relation} into the Gauss equation yields\n\\begin{equation}\n- \\lambda^2 \\Delta \\phi= n_i - n_0 \\exp\\left(-\\frac{\\phi}{T_e}\\right)\\,.\n\\end{equation}\nThis equation is not degenerate for the computation of the electric potential for vanishing $\\lambda$. Indeed, the nonlinear part of the equation provides a means of computing $\\phi$ in the quasi-neutral limit. This property is thoroughly investigated in \\cite{degond_numerical_2012}.\n\nThe hybrid electrostatic model may be recast into\n\\begin{subequations}\\label{eq:sys:Valsov:Boltzmann}\n\\begin{align}\n&\\partial_t f_i + v \\cdot \\nabla_x f_i - \\nabla_x \\phi \\cdot \\nabla_v f_i = \\frac{1}{\\kappa} \\Big( \\nu_{ii} \\left(\\mathcal{M}_{n_i,u_i,T_i} - f_i\\right) \\Big)\\,,\\\\\n& \\phi = - T_e \\ln\\left(\\frac{n_i}{n_0}\\right) \\,.\n\\end{align}\n\\end{subequations}\n\nLetting $\\kappa \\to 0$ together with $\\kappa\/(M\\varepsilon)\\ll 1$ and $\\xi=M$ yields the quasi-neutral fluid model \n\\begin{subequations}\\label{eq:sys:Euler:Boltzmann}\n\\begin{align}\n &\\partial_t n_i + \\nabla_x \\cdot (n_i u_i) = 0 \\\\\n &\\partial_t (n_i u_i) + \\nabla_x \\cdot (n_iu_i \\otimes u_i) + \\frac{1}{M^2} \\left(\\nabla_x p_i + n_i \\nabla_x\\phi\\right) =0 \\,, \\\\\n& \\partial_t W_i + \\nabla_x \\cdot \\big((W_i + p_i)u_i \\big) + n_i \\nabla_x\\phi\\cdot u_i = 0\\,,\\\\\n& \\phi = - T_e \\ln\\left(\\frac{n_i}{n_0}\\right) \\,.\n\\end{align}\n\\end{subequations}\nIn the models \\eqref{eq:sys:Valsov:Boltzmann} and 
\\eqref{eq:sys:Euler:Boltzmann} following the evolution of the plasma at the ionic scale, the fast electronic dynamics introduced by the electron inertia is filtered out of the equations by performing the low frequency limit $(M\\varepsilon) \\to 0$.\n\\begin{comment}\n\\section{The current coupling Scheme}\nIn this section the ``Current Coupling Scheme'' \\cite{tronci_hybrid_2014} is investigated with the tools introduced in the precedent sections. The framework of interest is that of a plasma with two populations of electrons. The first one is associated to the plasma bulk with a large density, the second one being a hot component, {\\it i.e.} with a large temperature. It is assumed to be a small fraction of the total electron population with a large temperature compared to the plasma bulk. The density and the typical temperature of this component will be denoted by $n_H$ and $T_H$, the bulk characteristics being denoted $(n_B,T_B)=(n_e,T_e)$. The collision frequency of the population $\\alpha$ against the population $\\beta$, $\\nu_{ee}^{\\alpha\\beta}$ satisfies the following scaling relations\n\\begin{align}\n \\label{eq:2}\n \\nu_{ee}^{HH}= \\varrho \\Theta \\nu_{ee}\\,, \\qquad \\nu_{ee}^{HB}= \\Theta\\nu_{ee} \\,, \\qquad \\nu_{ee}^{BH}= \\varrho \\nu_{ee} \\,, \\qquad \\nu_{ee}^{BB} = \\nu_{ee}\\,,\n\\end{align}\nwhere the reference collision frequency $\\nu_{ee}$ is computed with the parameters of the plasma bulk $(n_e,T_e)$ and therefore accounts for the collisions of the bulk particles. 
The other frequencies relate to this reference thanks to the following dimensionless parameters\n\\begin{equation}\n \\label{eq:3}\n \\varrho = \\frac{n_H}{n_e} \\,, \\qquad \\Theta^2 = \\frac{T_e}{T_H} \\,.\n\\end{equation}\nThe three components of the plasma are described by a Valsov equation, with\n\n\\begin{subequations}\n\\begin{align}\n \\label{eq:1}\n \\begin{split}\n&\\xi \\varepsilon \\Theta \\partial_t f_e^H + v \\cdot \\nabla_x f_e^H - \\eta\\Theta(E+ \\frac{\\beta}{\\varepsilon \\xi \\Theta} v \\times B)\\cdot \\nabla_v f_e^H = \\\\\n& \\hspace*{6cm} \\frac{\\Theta^2 \\xi}{\\kappa} \\nu_{ee} \\Big(\\varrho \\mathcal{Q}\\left(f_e^H,f_e^H\\right) + \\mathcal{Q}\\big(f_e^H,(\\Theta^{D_v\/2}\\varrho)^{-1} f_e^B\\big) \\Big)\\,,\\\\ \n \\end{split}\\\\\n \\begin{split}\n&\\xi \\varepsilon \\partial_t f_e^B + v \\cdot \\nabla_x f_e^B - \\eta(E+ \\frac{\\beta}{\\varepsilon \\xi} v \\times B)\\cdot \\nabla_v f_e^B = \\\\\n& \\hspace*{6cm} \\frac{ \\xi}{\\kappa} \\nu_{ee} \\Big( \\mathcal{Q}\\left(f_e^B,f_e^B\\right) + \\varrho \\mathcal{Q}\\big(f_e^B,(\\Theta^{D_v\/2}\\varrho)f_e^H\\big) \\Big)\\,,\\\\ \n \\end{split}\\\\\n &\\xi \\varepsilon \\partial_t f_i + v \\cdot \\nabla_x f_i - \\eta(E+ \\frac{\\beta}{\\varepsilon \\xi} v \\times B)\\cdot \\nabla_v f_i = \\frac{ \\xi}{\\kappa} \\nu_{ii} \\mathcal{Q}\\left(f_i,f_i\\right)\\,.\n\\end{align}\n\\end{subequations}\nIn this system the electron-ion collisions are rightfully disregarded, owing to the scaling relations stated by Eq.~\\eqref{eq:scaling:relation:freq}.\n\nTo simplify the derivation of the reduced system, the procedure is decomposed into two steps. \n\nFirst the limit $(\\varrho,\\Theta) \\to 0$ is considered, which amounts to assume that the density of the hot component is marginal compared to that of the bulk, while its temperature is much larger. This yields to the neglect of the inter-population collision processes as well as the inner collisions within the hot component. 
The following kinetic system is thus recovered\n\\begin{align}\n \\label{eq:two:pop}\n&\\xi \\varepsilon \\partial_t f_e^H + v \\cdot \\nabla_x f_e^H - \\eta(E+ \\frac{\\beta}{\\varepsilon \\xi} v \\times B)\\cdot \\nabla_v f_e^H = 0\\,,\\\\ \n&\\xi \\varepsilon \\partial_t f_e^B + v \\cdot \\nabla_x f_e^B - \\eta(E+ \\frac{\\beta}{\\varepsilon \\xi} v \\times B)\\cdot \\nabla_v f_e^B = \\frac{ \\xi}{\\kappa} \\Big( \\nu_{ee} \\mathcal{Q}\\left(f_e^B,f_e^B\\right) \\Big)\\,, \\\\\n&\\xi \\partial_t f_i + v \\cdot \\nabla_x f_i + \\eta(E+ \\frac{\\beta}{\\xi} v \\times B)\\cdot \\nabla_v f_i = \\frac{ \\xi}{\\kappa} \\nu_{ii} \\mathcal{Q}\\left(f_i,f_i\\right)\\,. \n\\end{align}\nSecond, the fluid limit is considered for both species in the plasma bulk, as well as the massless asymptotics for the electrons; letting $(\\kappa,\\varepsilon) \\to 0$ we obtain\n\n\n\\cite{todo_magnetohydrodynamic_1995,spatschek_high_2012} drift kinetic description of the hot electron component.\n\\end{comment}\n\\section{Conclusions}\\label{sec:conclusions}\n\nIn this paper, we propose an asymptotic analysis bridging kinetic plasma descriptions coupled to the Maxwell system and single fluid plasma models. Two frameworks are investigated. The first one is devoted to electromagnetic fields. The plasma is represented by a hierarchy of models starting with the bi-kinetic Vlasov-Maxwell system and ending with the single fluid Hall-Magneto-Hydro-Dynamic model. The second framework is dedicated to electrostatic fields. In this context, the asymptotic analysis makes it possible to derive a hierarchy of models bridging the bi-kinetic Vlasov-Poisson system to a single fluid representation consisting of a fluid system for the ions coupled to the Boltzmann relation for the electrons.\nThe investigations proposed within this document unravel different asymptotic parameters explaining the transition from one model to the other. 
The effort conducted in the present work consists in relating these asymptotic parameters to characteristics of the system. This means that the transition from one model to the other may be explained by a change in the plasma characteristics or in the typical scales at which the plasma is observed. \n\nThis last notion is important from the perspective of designing a numerical method. Indeed, the discretization of these equations requires the use of a mesh interval as well as a time step. These two numerical parameters define the typical space and time scales, and therefore a velocity scale as well, that the numerical method is aimed at capturing. This is related to the parameter $\\xi$ used for the asymptotic analysis. Regarding the quasi-neutral modellings investigated within this document, different choices are made for this parameter. The fastest scales are related to the electron thermal velocity when the fast electron dynamics is intended to be captured by the model. This is for instance the value selected for Electron-MHD regimes, either in the fluid or kinetic frameworks. For hybrid or single fluid plasma representations, the velocity scale is reduced to that of the mean flow of the plasma defined by the massive ions. The organisation of Sec.~\\ref{sec:electrostatic} is aimed at emphasizing this feature.\n\nThe second parameter, already established in previous works (see \\cite{DegDelDoy,DDSprep,DDNSV10}), is the generalized scaled Debye length $\\lambda$. It actually encompasses the scaled Debye length and the ratio of the typical velocity to the speed of light. Letting the generalized Debye length vanish amounts to filtering out of the equations the small scales attached to the charge separation as well as those related to the propagation of electromagnetic waves at the speed of light. The quasi-neutral limit is therefore a low frequency limit. 
Quasi-neutrality breakdowns may be explained by a refinement of the typical length scale or, for instance, a decrease of the plasma density. These changes are well accounted for by the asymptotic parameters set up to perform the analysis.\n\n\n\nThe vanishing of the electron inertia is related to a low Mach regime $(M\\varepsilon)\\ll 1$. \nIn single fluid plasma representations, the fast electron dynamics is dropped from the equations to perform a low frequency filtering, the system being assumed to evolve at the lower speed attached to the massive ions. Nonetheless, the nature of the flow may undergo significant changes, so that the particle inertia becomes significant again in accounting for the system evolution. This is illustrated in studies of plasma flows in sheaths, with supersonic particles, while the mean plasma velocity is small compared to the speed of sound in the plasma bulk \\cite{chodura_plasma_1986,stangeby_plasma_2000,GMAH08,DGAH10,manfredi_plasma-wall_2008}. Accounting for these phenomena is possible by selecting the appropriate typical velocity to resolve or filter the fast electron dynamics.\n\nFinally, the fluid assumption is classically related to a vanishing of the Knudsen number $\\kappa\\ll 1$, accounting for the relaxation of the distribution function towards the local thermodynamic equilibrium. The interplay of these four dimensionless parameters ($\\xi$, $\\lambda$, $M\\varepsilon$, $\\kappa$) defines a hierarchy of reduced models bridging kinetic plasma descriptions coupled to the Maxwell system to quasi-neutral plasma representations including kinetic, hybrid and single fluid modellings. These latter models, namely the Hall-MHD model and the Boltzmann relation, are widely used to design efficient numerical methods. 
The asymptotic analysis conducted within this document draws the guidelines for the derivation of numerical methods implementing local up-scalings, therefore widening the use of numerical methods discretizing these reduced models.\n\n\n\n\n \n\n \\section*{Acknowledgments} \nThis work has been supported by ``F\\'ed\\'eration de Fusion pour la Recherche par Confinement Magn\\'etique'' (FrFCM) in the frame of the project\n\"BRIDIPIC: BRIDging Particle-In-Cell methods and low frequency numerical models of plasmas\".\\newline\nA. Crestetto acknowledges support from the french ``Agence Nationale pour la Recherche (ANR)'' in the frame of the projects \nMoHyCon ANR-17-CE40-0027-01 and MUFFIN ANR-19-CE46-0004.\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\nCompletely integrable Hamiltonian systems admitting Lax representation \\cite{Lax} have been an object of constant interest in physics and mathematics during the last forty years.\n\nAn important problem in the theory of integrable systems still to be solved in general is the problem of variable separation. The separated variables $x_i$, $p_j$, $i,j=\n 1,\\dots,N $ are a set of (quasi)canonical coordinates such that the following system of {equations} is\nsatisfied~\\cite{SklSep}\n\\begin{gather*\n \\Phi_i(x_i, p_i,I_1, \\dots ,I_N, C_1,\\dots,C_r) = 0, \\qquad i=1,\\dots,N,\n \\end{gather*}\n where $\\Phi_i$ are certain functions, $I_k$ are Poisson-commuting integrals of motion, $C_i$ are Casimir functions and $N$ is half of the dimension of the phase space.\n\nThe separated coordinates (whenever they exist) allow one to write Abel-type equations (see Section~\\ref{section2.1}) which, in their turn, provide a possibility to solve explicitly the Hamilton equations of motion upon resolving the corresponding Abel--Jacobi inversion problem.\n Separated variables are also important when solving quantum\nintegrable models~\\cite{SklSep}. 
That is why the construction of variable separation is\n a central issue in the theory of both classical and quantum integrable systems.\n\n In order to construct separated variables in many cases one can use a pair of \\emph{separating functions} $A(u)$, $B(u)$, which depend on the dynamical variables and on a complex parameter $u$. The coordinates $x_1, \\dots, x_N$ are determined as zeroes of $B(u)$,\n\\begin{gather}\\label{xi}\nB(x_i)=0, \\qquad i=1,\\dots,N,\n\\end{gather}\nand (quasi)momenta are obtained as values of $A(u)$ in these zeros\n\\begin{gather}\\label{pi}\np_i=A(x_i),\\qquad i=1,\\dots,N .\n\\end{gather}\n\nIn the so-called Lax-integrable case (i.e., when the Hamilton equations of motion can be written in Lax form), a prescription to obtain separating functions $B(u)$, $A(u)$ for $\\mathfrak{gl}(2)$-valued Lax matrices $L(u)=\\sum\\limits_{i,j=1}^{2}L_{ij}(u)X_{ij} $, where $X_{ij}$, $i,j=1,2$ is a standard basis of\n$\\mathfrak{gl}(2)$ $(X_{ij})_{\\alpha\\beta}=\\delta_{i\\alpha}\\delta_{j,\\beta}$, have been introduced in \\cite{SklTrig, SklRat,SklSep}\\footnote{Often in the literature the choice $B(u)=L_{12}(u)$ is made. 
Nevertheless, since for the Cartan-invariant $r$-matrices the choice between $L_{12}(u)$ or $L_{21}(u)$ is arbitrary, we prefer the above-made choice.}\n\\begin{gather}\\label{sfstand}\nB(u)=L_{21}(u), \\qquad A(u)=L_{11}(u).\n\\end{gather}\nThe equations of separation in this case coincide with the spectral curve of the Lax matrix\n\\begin{gather*}\n \\Phi_i(x_i, p_i, I_1, \\dots ,I_N, C_1,\\dots,C_r) = \\det (L(x_i)-p_i \\, {\\rm Id})=0, \\qquad i=1,\\dots,N.\n\\end{gather*}\nThe constructed variables $x_i$, $p_i$ should be (quasi)canonical in order for the theory to work\n\\begin{gather*}\n\\{ x_i, p_j\\}=\\delta_{ij} f_i(x_i,p_i), \\qquad \\{x_i,x_j\\}=0,\\qquad \\{p_i,p_j\\}=0, \\qquad \\forall\\,\ni,j= 1,\\dots,N.\n\\end{gather*}\nThe separating functions $A(u)$ and $B(u)$ have to satisfy an appropriate Poisson algebra \\cite{SkrDub,SkrDub2} in order to produce (quasi)canonical coordinates (see Section~\\ref{section2.2} below).\n The right-hand side of this algebra, as well as the very definition (\\ref{xi})--(\\ref{pi}), is asymmetric in the functions~$A(u)$,~$B(u)$. 
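That the pair built from (\ref{xi})--(\ref{pi}) with the choice (\ref{sfstand}) automatically lies on the spectral curve is easy to check numerically: at a zero $x_i$ of $B(u)=L_{21}(u)$, with $p_i=A(x_i)=L_{11}(x_i)$, the determinant factorizes as $(L_{11}-p)(L_{22}-p)-L_{12}L_{21}$, and both $L_{21}(x_i)$ and $L_{11}(x_i)-p_i$ vanish. The short sketch below uses an arbitrarily chosen numerical polynomial Lax matrix, purely for illustration, not a concrete integrable model.

```python
import numpy as np

# Toy gl(2)-valued quadratic Lax matrix L(u) = L0 + u*L1 + u^2*L2
# (fixed arbitrary entries -- an illustration only).
L0 = np.array([[1.0, 2.0], [3.0, 4.0]])
L1 = np.array([[0.0, 1.0], [-2.0, 1.0]])
L2 = np.array([[2.0, -1.0], [1.0, 0.0]])
L = lambda u: L0 + u * L1 + u**2 * L2

# x_i: zeroes of B(u) = L_{21}(u); p_i = A(x_i) = L_{11}(x_i).
xs = np.roots([L2[1, 0], L1[1, 0], L0[1, 0]])
for xi in xs:
    pi = L(xi)[0, 0]
    # The point (x_i, p_i) lies on the spectral curve det(L(u) - p Id) = 0.
    print(abs(np.linalg.det(L(xi) - pi * np.eye(2))))  # ~0 up to roundoff
```

The same computation with the reversed definition (\ref{xi'pi'}) would place $x_i$ at the zeroes of $A(u)$ instead, which is exactly the situation analyzed in the rest of the paper.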
Nevertheless, in some cases the algebra of separating functions is symmetric in~$A(u)$ and~$B(u)$ and has the following form\n\\begin{subequations}\\label{sepalg0}\n\\begin{gather} \\label{sepalg01}\n \\{B(u),B(v)\\}=b(u,v)A(v)B(u)- b(v,u)A(u)B(v),\n\\\\ \\label{sepalg02}\n \\{A(u),B(v)\\}=\\alpha(u,v)A(v)B(u)- \\beta(u,v)A(u)B(v),\n\\\\ \\label{sepalg03}\n \\{A(u),A(v)\\}=a(u,v)A(v)B(u)- a(v,u)A(u)B(v).\n\\end{gather}\n\\end{subequations}\n This situation occurs when the Lax matrix satisfies quadratic tensor brackets \\cite{TF, Skl}\n \\begin{gather}\\label{rmbr0}\n\\{{L}(u)\\otimes 1,1\\otimes\n{L}(v)\\}=[r(u,v),{L}(u)\\otimes L(v)].\n\\end{gather}\nHere $r(u,v)=\\sum\\limits_{i,j,k,l=1}^{2}r_{ij,kl}(u, v)X_{ij} \\otimes\nX_{kl}$ is a skew-symmetric classical $r$-matrix.\n\nThe symmetry of the separating algebra (\\ref{sepalg0}) poses a natural question: is it possible (when the separating algebra is symmetric) to exchange the roles of the separating functions? That is, is it possible to define new separated variables as follows\n\\begin{gather}\\label{xi'pi'}\nA(x_i)=0, \\qquad p_i=B(x_i),\\qquad i=1,\\dots,N ?\n\\end{gather}\n\nThe answer to this question is not obvious. 
Indeed, while the reversed definition (\\ref{xi'pi'}) in the symmetric case guarantees the (quasi)canonicity of the constructed separated coordinates, it does not guarantee the existence of the equations of separation for them.\\footnote{See Remark~\\ref{remark5} for another -- ``banal'' explanation of the reversed definition (\\ref{xi'pi'}) which is not used here.}\n\nIn the present paper we are going to answer two general questions:\n\\begin{enumerate}\\itemsep=0pt\n\\item For what $\\mathfrak{gl}(2)\\otimes \\mathfrak{gl}(2)$-valued $r$-matrices is the separating algebra of the functions $A(u)$ and $B(u)$ defined by (\\ref{sfstand}) symmetric in $A(u)$ and $B(v)$, i.e., of the form (\\ref{sepalg0})?\n\\item When does the reversed definition (\\ref{xi'pi'}) produce separated variables for such $r$-matrices? That is, when do the corresponding quasi-canonical coordinates satisfy equations of separation with the initial algebra of first integrals?\n\\end{enumerate}\n\nFor the convenience of the reader we formulate our answers already in the Introduction. In particular, the answer to the first question is contained in the following proposition.\n\\begin{Proposition} The functions $A(u)$ and $B(u)$ defined by~\\eqref{sfstand} satisfy the algebra \\eqref{sepalg0} with respect to the brackets~\\eqref{rmbr0} if the components of the $r$-matrix satisfy the following conditions\n\\begin{subequations}\\label{condrms0}\n\\begin{gather}\\label{condrm01}\nr_{21,21}(u,v)=0, \\qquad r_{21,11}(u,v)=0, \\qquad r_{11,21}(u,v)=0,\n\\\\\n\\label{condrm02}\nr_{12,12}(u,v)=0, \\qquad r_{12,22}(u,v)=0, \\qquad r_{22,12}(u,v)=0,\n\\\\\n\\label{condrm03}\n r_{22,22}(u,v)=r_{11,11}(u,v).\n\\end{gather}\n\\end{subequations}\n\\end{Proposition}\n\n There are at least two $\\mathfrak{gl}(2)\\otimes \\mathfrak{gl}(2)$-valued classical skew-symmetric $r$-matrices satisfying the conditions (\\ref{condrms0}): the\n standard rational and standard trigonometric $r$-matrices. 
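For the standard rational $r$-matrix $r(u,v)=P/(u-v)$, where $P=\sum_{i,j}X_{ij}\otimes X_{ji}$ is the permutation operator, the only non-zero components are $r_{ij,ji}(u,v)=1/(u-v)$, so the conditions (\ref{condrms0}) can be checked componentwise. The short sketch below does so (the numerical values of $u$ and $v$ are arbitrary).

```python
import numpy as np

def X(i, j):
    """gl(2) basis matrix: (X_ij)_{ab} = delta_{ia} delta_{jb} (1-based)."""
    m = np.zeros((2, 2))
    m[i - 1, j - 1] = 1.0
    return m

def rational_r_components(u, v):
    """Components r_{ij,kl}(u,v) of r(u,v) = P/(u-v) in the basis
    X_ij (x) X_kl, extracted via the entrywise inner product."""
    P = sum(np.kron(X(i, j), X(j, i)) for i in (1, 2) for j in (1, 2))
    R = P / (u - v)
    idx = (1, 2)
    return {(i, j, k, l): np.sum(np.kron(X(i, j), X(k, l)) * R)
            for i in idx for j in idx for k in idx for l in idx}

c = rational_r_components(u=1.5, v=0.5)
# The seven conditions on the r-matrix components:
assert c[2, 1, 2, 1] == c[2, 1, 1, 1] == c[1, 1, 2, 1] == 0.0
assert c[1, 2, 1, 2] == c[1, 2, 2, 2] == c[2, 2, 1, 2] == 0.0
assert c[2, 2, 2, 2] == c[1, 1, 1, 1]   # both equal 1/(u - v)
print("standard rational r-matrix satisfies the symmetry conditions")
```

The trigonometric case can be checked the same way once its components are written out; only the numerical check for the rational case is shown here.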
For the case of these $r$-matrices we proceed with the answer to the second question. For this purpose we also define the class of Lax matrices under consideration. We consider the most physically important case of the Lax matrices of the XXX and XXZ models of $N$ classical spins with twisted periodic boundary conditions, i.e., the Lax matrices of the following form\n \\begin{gather}\\label{laxc}\nL(u)=L^{(\\nu_1)}(u) \\cdots L^{(\\nu_N)}(u) C,\n\\end{gather}\nwhere $C=\\sum\\limits_{i,j=1}^2 c_{ij}X_{ij}$ is a two by two twist matrix satisfying the following condition\n\\begin{gather}\\label{rCC}\n[r(u,v),C\\otimes C]=0\n\\end{gather}\nand $L^{(\\nu_i)}(u)$ is the Lax matrix with a simple pole at the point $u=\\nu_i$, corresponding to the $i$-th site of the classical spin chain, where the classical spins satisfy the Poisson brackets of $\\mathfrak{gl}(2)^{\\oplus N}$ (in the case of the rational $r$-matrix) and of the direct sum of $N$ trigonometric Sklyanin-type algebras (in the case of the trigonometric $r$-matrix). Observe that for all the considered quadratic Poisson algebras the non-trivial integrals of motion are generated by $I(u)=\\operatorname{tr} L(u)$.\n\nThe following theorem holds true:\n\n\\begin{Theorem}\n\\looseness=-1 Let the coordinates $x_i$, $p_i$, $i=1,\\dots,N $ be defined by \\eqref{xi'pi'}, the functions $A(u)$ and $B(u)$ be defined by \\eqref{sfstand} and the Lax matrix $L(u)$ be defined by \\eqref{laxc}. Let the mat\\-rix~$C$ be such that $c_{11}\\neq 0$. Then the coordinates~$x_i$, $p_i$ are separated coordinates for the classical integrable system with the algebra of integrals of motion generated by $I(u)=\\operatorname{tr} L(u)$ if and only if\n\\begin{gather*} \\det C=0.\\end{gather*}\n\\end{Theorem}\n\nFor the rational XXX case the matrix $C=\\sum\\limits_{i,j=1}^2 c_{ij}X_{ij}$ is an arbitrary constant degenerate matrix. 
In this case the equations of separation are written as follows\n\\begin{gather} \\label{eqsep0}\nc_{12} p_i=c_{11} I(x_i), \\qquad i=1,\\dots,N,\n\\end{gather}\nand we distinguish two cases: $c_{12}\\neq 0$ and $c_{12}=0$. In the first case the curve of separation is a rational one and the corresponding Abel equations are the following\n\\begin{gather*}\n\\sum\\limits_{i=1}^N \\dfrac{c_{11}x_i^{N-j}}{c_{12}p_i}\n\\frac{{\\rm d}x_i}{{\\rm d}t_k} =-\\delta^j_{k}, \\qquad j,k=1,\\dots,N ,\n\\end{gather*}\nwhere $\\frac{{\\rm d}x_i}{{\\rm d}t_k}=\\{I_k,x_i\\}$, $k=1,\\dots,N $, etc. These equations are easily integrated in terms of elementary functions, as a consequence of the fact that the separation curve~(\\ref{eqsep0}) is a rational one.\n\nThe second case $c_{12}=0$ is even more special. In this case we also have that $c_{22}=0$ and the equations of separation coincide with the separating polynomial, acquiring the form\n\\begin{gather}\\label{specsepeq2}\n I(x_i)=A(x_i)=0, \\qquad i=1,\\dots,N .\n\\end{gather}\nThe coordinates $x_i$ in this case become functions of the integrals of motion and may be identified with action variables. The Abel equations are written for the (quasi)conjugate variables $p_i$\n\\begin{gather*}\n\\sum\\limits_{i=1}^N \\dfrac{x_i^{N-j}}{\\partial_{x_i} I(x_i) }\\frac{1}{p_i}\n\\frac{{\\rm d}p_i}{{\\rm d}t_k} =\\delta^j_{k}, \\qquad j,k=1,\\dots,N ,\n\\end{gather*}\nand produce linear differential equations for the corresponding angle coordinates $\\phi_i=\\ln p_i$.\n\nIn the trigonometric XXZ case the constant twist matrix $C$ is diagonal, \\mbox{$C=\\sum\\limits_{i=1}^2 c_{ii}X_{ii}$} and $c_{22}=0$ in the degenerate case. 
That is why in this case only the special degenerate case of non-standard separation of variables occurs, characterized by the equation of separation~(\\ref{specsepeq2}), where the coordinates of separation coincide with action variables and the canonically conjugate variables coincide with the angles of the Liouville theorem.\n\nTo the best of our knowledge these are the first examples (at least in the Lax-integrable case) in which the variables of separation coincide with the action-angle variables and their construction provides an immediate solution of the equations of motion without performing the (generally difficult) task of solving the Abel--Jacobi inversion problem.\n\nAt the end of the introduction let us make several bibliographical comments. The Lax-pair based approach to the variable separation in its general form was proposed by Sklyanin in~\\cite{SklSep} as a development of his previous idea \\cite{SklToda,SklTrig, SklRat}. In the classical case the idea of the approach may be traced further back to the papers \\cite{Alber,NV}. In the quantum case the approach has received a lot of attention in the literature; let us mention only the series of recent papers \\cite{Maillet17,MailletJPA18, MailletJMP18,RV}.\n In the classical case, unfortunately, there have been very few works on the subject. For the classical XXX model we can mention several papers \\cite{DD,Geht,Scott,SklRat,SklYan,SklSep}.\nFor the classical XXZ model we can mention only two papers on the subject, \\cite{SklTrig} and \\cite{SkrDub2}. 
To fill this gap in the knowledge and to study the corresponding classical models in more detail is one of the aims of our paper.\n\nThe structure of the present paper is the following: in Section~\\ref{section2} we recall the general notions of the classical variable separation theory, in Section~\\ref{section3} we consider the Lax-integrable case, and in Sections~\\ref{section4} and \\ref{section5} we concentrate on the examples of the classical XXX and XXZ models. In these sections we also consider $N=2$ examples, investigating the corresponding cases in detail. In particular, we explicitly find the reconstruction formulae for them, expressing the initial dynamical variables via the constructed coordinates of separation and the values of the Casimir functions. In the concluding section we summarize our results and discuss the open problems.\n\n\\section{Separation of variables}\\label{section2}\n\\subsection{Definitions and notations}\\label{section2.1}\nLet us recall the definitions of Liouville integrability and\nseparation of variables in the general theory of Hamiltonian\nsystems. 
An integrable Hamiltonian system with $N$ degrees of\nfreedom is determined by a $2N$-dimensional symplectic manifold\n$\\mathcal{M}$ (a symplectic leaf in $(\\mathcal{P},\\{\\ ,\\ \\})$) and~$N$ independent functions (first integrals) $I_j$ commuting with respect to the Poisson bracket{\\samepage\n\\begin{gather*}\\{I_i ,I_j\\} = 0 ,\\qquad i,j =1,\\dots,N \\end{gather*}\n(any first integral $I_j$ may be taken as the Hamiltonian $H$ of the system).}\n\n To find separated variables means to find\n(at least locally) a set of coordinates $x_i$, $p_j$, $i,j =1,\\dots,N$ such that there exist $N$ relations\n\\begin{gather*}\n \\Phi_i(x_i, p_i, I_1,\\dots, I_N, C_1,\\dots,C_r) = 0, \\qquad i=1,\\dots,N,\n \\end{gather*}\nwhere $C_i$, $i= 1,\\dots,r $ are Casimir functions and the coordinates $x_i$, $p_j$, $i,j =1,\\dots,N $ are\ncanonical\n\\begin{gather*}\n\\{ x_i, p_j\\}=\\delta_{ij}, \\qquad \\{x_i,x_j\\}=0,\\qquad \\{p_i,p_j\\}=0, \\qquad \\forall\\, i,j =1,\\dots,N .\n\\end{gather*}\nIn the present paper it will be convenient for us to work with quasi-canonical coordinates satisfying the following Poisson brackets\n\\begin{gather*}\n\\{ x_i, p_j\\}=\\delta_{ij} p_j, \\qquad \\{x_i,x_j\\}=0,\\qquad \\{p_i,p_j\\}=0, \\qquad \\forall\\, i,j =1,\\dots,N .\n\\end{gather*}\nClearly the variables $x_i$, $\\phi_j=\\log p_j$ will then be canonical.\n\nIt is possible to show that the coordinates of separation $x_i$ satisfy the Abel-type equations\n\\begin{gather}\\label{abel0}\n\\sum\\limits_{i=1}^N \\dfrac{ \\dfrac{\\partial \\Phi_i(x_i, p_i, I_1,\\dots, I_N, C_1,\\dots,C_r)}{\\partial I_k} }{ \\dfrac{\\partial \\Phi_i(x_i, p_i, I_1,\\dots, I_N, C_1,\\dots,C_r)}{\\partial p_i}} \\frac{1}{ p_i}\\frac{{\\rm d}x_i}{{\\rm d}t_j} = - \\delta_{kj}, \\qquad \\forall\\, k,j=1,\\dots,N,\n\\end{gather}\nand similar Abel-type equations are satisfied by the momenta of separation $p_i$\n\\begin{gather}\\label{abelMom0}\n\\sum\\limits_{i=1}^N \\dfrac{ \\dfrac{\\partial \\Phi_i(x_i, p_i, 
I_1,\\dots, I_N, C_1,\\dots,C_r)}{\\partial I_k} }{ \\dfrac{\\partial \\Phi_i(x_i, p_i, I_1,\\dots, I_N, C_1,\\dots,C_r)}{\\partial x_i}} \\frac{1}{ p_i}\\frac{{\\rm d}p_i}{{\\rm d}t_j} = \\delta_{kj}, \\qquad \\forall\\, k,j=1,\\dots,N .\n\\end{gather}\nThese equations are the last step before the integration of the classical equations of motion.\n\n\\subsection{The method of separating functions}\\label{section2.2}\nLet $B(u)$ and $A(u)$ be some functions of the dynamical variables and of an auxiliary complex parameter $u$, which is constant with respect to the bracket $\\{\\ ,\\ \\}$. Let the points $x_i$, $i=1,\\dots,N $ be the zeros of the function $B(u)$ and $p_i$, $i=1,\\dots,N $ be the values of $A(u)$ at these points. We wish to obtain (quasi)canonical Poisson brackets among these new coordinates using the Poisson brackets between the functions $B(u)$ and $A(u)$. The following proposition holds true \\cite{SkrDub}.\n\\begin{Proposition}\\label{canon}\nLet the coordinates $x_i$ and $p_j$, $i,j=1,\\dots,N$ be defined as $B(x_i)=0$, $p_j=A(x_j)$. 
Let the functions $A(u)$, $B(u)$ satisfy the following Poisson algebra\n\\begin{subequations}\\label{sepalg}\n\\begin{gather} \\label{sepalg1}\n \\{B(u),B(v)\\}=b(u,v)B(u)- b(v,u)B(v),\\\\\n \\label{sepalg2} \\{A(u),B(v)\\}=\\alpha(u,v)B(u)- \\beta(u,v)B(v),\\\\\n \\label{sepalg3} \\{A(u),A(v)\\}=a(u,v)B(u)- a(v,u)B(v).\n\\end{gather}\n\\end{subequations} Then the Poisson brackets between the functions $x_i$ and $p_j$, $\\forall\\, i,j=1,\\dots,N $, are the following\n\\begin{gather*}\n \\{x_i,x_j\\}=0, \\qquad \\forall\\, i,j=1,\\dots,N,\\\\\n \\{x_j,p_i\\}=0, \\qquad \\text{if} \\quad i\\neq j,\\\\\n \\{p_i,p_j\\}=0, \\qquad \\forall\\, i,j=1,\\dots,N .\n\\end{gather*}\n If, moreover, the condition\n\\begin{gather*}\n \\lim\\limits_{u\\rightarrow v} (\\alpha(u,v)B(u)- \\beta(u,v)B(v))= A(v) \\partial_v B(v) +\\gamma(v) B(v)\n\\end{gather*}\nholds, then the corresponding Poisson brackets are quasi-canonical, i.e.,\n\\begin{gather*}\n \\{ x_i, p_i\\}=p_i, \\qquad \\forall\\, i=1,\\dots,N .\n\\end{gather*}\n\\end{Proposition}\n\n\\begin{Remark}\\label{remark1}Observe that the coefficients $a(u,v)$, $b(u,v)$, $\\alpha(u,v)$, $\\beta(u,v)$, $\\gamma(v)$ above may depend not only on the spectral parameters but also on the dynamical variables.\n\\end{Remark}\n\n\\begin{Remark}\\label{remark2} Observe that in general the separating algebra (\\ref{sepalg}) is asymmetric in the functions $A(u)$, $B(v)$. Nevertheless, for some dynamical coefficients $a(u,v)$, $b(u,v)$,\n$\\alpha(u,v)$, $\\beta(u,v)$ it may become symmetric in the functions $A(u)$, $B(v)$. In such a case the functions $A(u)$, $B(v)$ become interchangeable and one can ``invert'' the procedure, defining the separated coordinates also as follows: $A(x_i)=0$, $p_j=B(x_j)$. 
This is the situation that will be studied in this article.\n\\end{Remark}\n\n\\section{Separation of variables: Lax-integrable case}\\label{section3}\n\\subsection{The equations of separation}\\label{section3.1}\nLet us specify the above theory, i.e., the equations of separation and the separating functions, for the Lax-integrable case, when the Hamiltonian equations of motion with respect to a Hamiltonian $H$ can be written in Lax form \\cite{Lax} with a spectral-parameter-dependent Lax matrix\n\\begin{gather*}\n\\dot L(u)=\\left[L(u), M_H(u)\\right].\n\\end{gather*}\nAccording to the ``magic recipe'' of Sklyanin, in this case the role of all equations of separation is played by a single equation, namely the spectral curve of the Lax matrix\n \\begin{gather*}\n \\Phi_i(x_i, p_i, I_1,\\dots, I_N, C_1,\\dots,C_r) = \\det (L(x_i) - p_i\\, {\\rm Id})=0, \\qquad i=1,\\dots,N .\n \\end{gather*}\nThis hypothesis works well for the case of the $\\mathfrak{gl}(n)$-valued Lax matrices \\cite{AHH, DD, Geht, Scott,SklSep}. In what follows we will consider the simplest case of the $\\mathfrak{gl}(2)$-valued Lax matrices.\n\\subsection{The separating functions}\\label{section3.2}\nLet $X_{ij}$, $i,j=1,2$ be the standard basis in $\\mathfrak{gl}(2)$ with the\ncommutation relations\n\\begin{gather*}\n[ X_{ij}, X_{kl}]= \\delta_{kj} X_{il}-\\delta_{il}X_{kj}.\n\\end{gather*}\nThe $\\mathfrak{gl}(2)$-valued Lax matrix is written as follows\n\\begin{gather*}L(u)=\\sum\\limits_{i,j=1}^2 L_{ij}(u)X_{ij}.\\end{gather*}\nFollowing the ``magic recipe'' in its standard version \\cite{SklSep} we will assume that the separating functions $A(u)$ and $B(u)$ are defined as follows:\n\\begin{gather}\\label{AB}\nA(u)=L_{11}(u), \\qquad B(u)=L_{21}(u).\n\\end{gather}\n\n\\subsection{The separating algebra and its symmetries}\\label{section3.3}\nNow we will require that the algebra of the functions $A(u)$ and $B(u)$ defined by~(\\ref{AB}) has the particular form~(\\ref{sepalg}). 
For this purpose it is necessary at first to define the Poisson brackets among the components of the Lax matrix.\n\nIn this paper we will consider the case of the so-called quadratic Sklyanin bracket~\\cite{Skl}\n\\begin{gather}\\label{rmbr}\n\\{{L}(u)\\otimes 1,1\\otimes {L}(v)\\}=\\big[r^{12}(u,v),{L}(u)\\otimes L(v)\\big],\n\\end{gather}\nwhere\n\\begin{gather}\\label{rmcf}\nr^{12}(u,v)=\\sum\\limits_{i,j,k,l=1}^{2}r_{ij,kl}(u, v)X_{ij}\n\\otimes X_{kl}\n\\end{gather}\nis a skew-symmetric classical $r$-matrix: $r^{12}(u,v)=-r^{21}(v,u)$ (see \\cite{BD, TF, RF, Skl}).\n\nThe algebra (\\ref{sepalg}) is satisfied by the above functions $A(u)$ and $B(u)$ under certain conditions on the $r$-matrix. In more detail, the following proposition holds true.\\footnote{See \\cite{SkrDub,SkrDub2} for the generalization of this proposition onto the case of $\\mathfrak{gl}(n)$-valued Lax matrices.}\n\\begin{Proposition} The functions $A(u)$ and $B(u)$ defined by \\eqref{AB} satisfy the algebra \\eqref{sepalg} with respect to the brackets \\eqref{rmbr} if the components of the $r$-matrix satisfy the following conditions\n\\begin{gather}\\label{condrmg}\nr_{21,21}(u,v)=0, \\qquad r_{21,11}(u,v)=0, \\qquad r_{11,21}(u,v)=0.\n\\end{gather}\n\\end{Proposition}\n\n\\begin{proof} The proof of the proposition is achieved by direct calculation.\\end{proof}\n\nObserve that the algebra (\\ref{sepalg}) is very asymmetric in the functions $A(u)$ and $B(u)$. It is asymmetric also in the considered case after imposing the conditions~(\\ref{condrmg}).\nWe wish to investigate the question when the quadratic Poisson algebra of the functions $A(u)$ and $B(u)$ not only satisfies~(\\ref{sepalg}) but also has its right-hand side to be symmetric in functions~$A(u)$ and~$B(u)$. 
Evidently, such symmetry will require more rigid conditions on the $r$-matrix.\n\nThe following proposition holds true.\n\\begin{Proposition}\\quad\n\\begin{enumerate}\\itemsep=0pt\n\\item[$(i)$] The functions $A(u)$ and $B(u)$ defined by \\eqref{AB} satisfy the algebra \\eqref{sepalg} with respect to the brackets \\eqref{rmbr} in symmetric way with respect to $A(u)$ and $B(v)$ if the components of the $r$-matrix satisfy the following conditions\n\\begin{subequations}\\label{condrms}\n\\begin{gather}\\label{condrm1}\nr_{21,21}(u,v)=0, \\qquad r_{21,11}(u,v)=0, \\qquad r_{11,21}(u,v)=0,\n\\\\ \\label{condrm2}\n r_{12,12}(u,v)=0, \\qquad r_{12,22}(u,v)=0, \\qquad r_{22,12}(u,v)=0,\n\\\\ \\label{condrm3}\n r_{22,22}(u,v)=r_{11,11}(u,v).\n\\end{gather}\n\\end{subequations}\n\\item[$(ii)$] Under the conditions \\eqref{condrms} the algebra of separating functions reads as follows\n\\begin{subequations}\\label{sepalgs}\n\\begin{gather} \\label{sepalgs1}\n \\{B(u),B(v)\\}=r_{21,22}(u,v) A(u) B(v)+r_{22,21}(u,v) B(u) A(v),\n\\\\ \\label{sepalgs2}\n \\{A(u),B(v)\\}= (r_{11,22}(u,v)-r_{11,11}(u,v))B(v) A(u)+r_{12,21}(u,v) B(u) A(v),\n\\\\ \\label{sepalgs3}\n \\{A(u),A(v)\\}= r_{11,12}(u,v) A(u)B(v)+r_{12,11}(u,v) B(u)A(v).\n\\end{gather}\n\\end{subequations}\n\\end{enumerate}\n\\end{Proposition}\n\n\\begin{proof} The proposition is proven by direct calculation.\\end{proof}\n\n\\begin{Remark}\\label{remark3}There are at least two $\\mathfrak{gl}(2)\\otimes \\mathfrak{gl}(2)$ valued classical skew-symmetric $r$-matrices satisfying the condition (\\ref{condrms}):\n standard rational and standard trigonometric $r$-matrices.\n\\end{Remark}\n\nUsing the symmetry of separating algebra, one can invert the procedure described in the previous subsection and define the (quasi)canonical coordinates as follows\n \\begin{gather}\\label{p-qnonst}\nA(x_i)=0, \\qquad p_i=B(x_i).\n\\end{gather}\nIn the next section we will address, for the cases of rational and trigonometric $r$-matrices, the 
question whether the canonical coordinates defined by (\\ref{p-qnonst}) are separation coordinates.\n\n\\section{Classical XXX spin model}\\label{section4}\n\\subsection{Poisson brackets and Lax matrix}\\label{section4.1}\nLet us now consider the simplest possible case of the standard rational $r$-matrix\n\\begin{gather}\\label{rrm}\nr(u,v)=\\frac{\\sum\\limits_{i,j=1}^2 X_{ij}\\otimes X_{ji}}{u-v}\n\\end{gather}\nand describe the corresponding Lax matrices $L(u)$, satisfying the quadratic brac\\-kets~(\\ref{rmbr}).\n\nAs is well known, the Lax matrices of the spin-chain models satisfying the quadratic brackets~(\\ref{rmbr}) can be written in the following product form\n\\begin{gather}\\label{prodLraz}\nL^{(1,2,\\dots,N)}(u)= L^{(\\nu_1)}(u)\\cdots L^{(\\nu_N)}(u),\n\\end{gather}\nwhere $N$ is arbitrary and the basic one-spin matrices $L^{(\\nu_k)}(u)$ are written as follows\n\\begin{gather*}\nL^{(\\nu_k)}(u)=1+\\frac{1}{u-\\nu_k} \\sum\\limits_{i,j=1}^2 S^{(k)}_{ji} X_{ij}.\n\\end{gather*}\nHere $\\nu_i\\neq \\nu_j$ when $i\\neq j$, $i,j=1,\\dots,N $, and the Poisson brackets among the coordinates $S^{(n)}_{ij}$, $S^{(m)}_{kl}$ are those of the direct sum $\\mathfrak{gl}(2)^{\\oplus N}$\n\\begin{gather*}\n\\big\\{{S}_{ij}^{(m)}, {S}_{kl}^{(n)}\\big\\}= \\delta^{mn}\\big(\\delta_{kj}\n{S}_{il}^{(m)}- \\delta_{il} {S}_{kj}^{(m)}\\big).\n\\end{gather*}\n\nHereafter we will be interested in the Lax matrices of the following form\n\\begin{gather}\\label{LCraz}\nL(u)= L^{(1,2,\\dots,N)}(u) C,\n\\end{gather}\nwhere $C$ is an arbitrary constant matrix $C=\\sum\\limits_{i,j=1}^2 c_{ij} X_{ij}$.\nBy virtue of the fact that in the case of the $r$-matrix (\\ref{rrm}) \\begin{gather*}[r(u,v), C\\otimes C]=0,\\end{gather*} the Lax matrix (\\ref{LCraz}) satisfies the Poisson bracket (\\ref{rmbr}) with the rational $r$-matrix (\\ref{rrm}).\n\nThe Lax matrix (\\ref{LCraz}) is a Lax matrix of the {\\it XXX type $\\mathfrak{gl}(2)$ Heisenberg spin chain with twisted periodic boundary 
conditions} defined by the constant twist matrix $C$ \\cite{SklSep}.\n\n\\subsection{Integrals and Casimir functions}\\label{section4.2}\nLet us now consider the integrable system on $\\mathfrak{gl}(2)^{\\oplus N}$ defined with the help of the Lax mat\\-rix~(\\ref{LCraz}). Its mutually Poisson-commuting integrals of motion are constructed from the characteristic polynomial of the Lax matrix $L(u)$\n\\begin{gather*}I(w,u)=\\det (L(u)-w \\, {\\rm Id})=w^2-w \\operatorname{tr} L(u)+ \\det L(u).\\end{gather*}\nThe function $ \\det L(u)=\\det C \\det L^{(1,2,\\dots,N)}(u)$ is a generating function of the Casimir functions of the quadratic brackets~(\\ref{rmbr}). The Casimir functions it contains are combinations of the linear and quadratic Casimir functions of $\\mathfrak{gl}(2)^{\\oplus N}$\n\\begin{gather*}\nc_{k}=S^{(k)}_{11}+ S^{(k)}_{22}, \\qquad C_{k}=S^{(k)}_{11} S^{(k)}_{22}- S^{(k)}_{12} S^{(k)}_{21}.\n\\end{gather*}\nThe generating function of the integrals of motion is\n\\begin{gather*}\nI(u)= \\operatorname{tr} L(u).\n\\end{gather*}\nMore explicitly, we have that\n\\begin{gather*}\nI(u)=\\frac{1}{\\prod\\limits_{i=1}^N (\\nu_i-u)}\\left( (-1)^N(c_{11}+c_{22}) u^{N}+ \\sum\\limits_{k=1}^{N} I_k u^{N-k}\\right),\n\\end{gather*}\nwhere the Hamiltonians $I_k$ are non-homogeneous polynomials of degree up to $k$ in the dynamical variables. 
In particular, we have that\n\\begin{gather*}\nI_1=-\\left( \\sum\\limits_{i,j=1}^2 c_{ij} \\sum\\limits_{k=1}^N S^{(k)}_{ij}+ \\sum\\limits_{l=1}^N \\nu_l \\sum\\limits_{i=1}^2 c_{ii}\\right),\\\\\nI_2= \\sum\\limits_{i,j=1}^2 c_{ij} \\sum\\limits_{l,m=1,\\, l 2$ a~finite\nidempotent algebra $\\m a_n$ with the following properties: The universe of $\\m\na_n$ has size $4n$ and $\\m a_n$ does not have a~minority term, but for every\nsubset $E$ of $A_n$ of size $n-1$ there is a~term of $\\m a_n$ that acts as\na~minority term on the elements of $E$.\n\nWe start our construction by fixing some odd $n > 2$ and some minority\noperation $m$ on the set $[n] = \\{1, 2, \\dots, n\\}$. To make things concrete\nwe set\n\\[\n m(x,y,z)=\\begin{cases}\n x& y=z\\\\\n y& x=z\\\\\n z& \\text{else,}\n \\end{cases}\n\\]\nbut note that any minority operation on $[n]$ will do.\n\nSince there are two nonisomorphic groups of order 4, we have two different\nnatural group operations on $\\{0,1,2,3\\}$: addition modulo~4, which we will\ndenote by `$+$' (its inverse is `$-$'), and bitwise XOR, which we denote by\n`$\\oplus$' (this operation takes bitwise XOR of the binary representations of\ninput numbers, so for example $1\\oplus 3=2$). Throughout this section, we will\nuse arithmetic modulo 4, e.g., $6x = x + x$, for all expressions except those\ninvolving indices.\n\nThe construction relies on similarities and subtle differences of the two group\nstructures, and the derived Maltsev operations, $x-y+z$ and $x\\oplus y\\oplus z$.\nBoth these operations share a congruence $\\equiv_2$ that is given by taking\nthe remainder modulo 2. We note that $x\\equiv_2 y$ if and only if $2x = 2y$.\n\n\\begin{observation}\\label{obs:maltsev-diff}\n Let $x,y,z\\in \\{0,1,2,3\\}$. 
Then\n\\[\n (x\\oplus y\\oplus z) - (x - y + z) \\in \\{0,2\\}\n,\\]\nand moreover the result depends only on the classes of\n $x$, $y$, and $z$ in the congruence $\\equiv_2$ (i.e., the least significant\nbinary bits of $x$, $y$, and $z$).\n\\end{observation}\n\\begin{proof}\nBoth Maltsev operations agree modulo $\\equiv_2$, hence the difference lies in\nthe $\\equiv_2$-class of 0.\n\nTo see the second part, it is enough to observe that $x\\oplus 2=x+2=x-2$ for all\n$x$. Hence changing, say $x$ to $x'=x\\oplus 2$ simply flips the most\n significant binary bit\nof both $x\\oplus y\\oplus z$ and $x - y + z$, keeping the difference the same.\n\\end{proof}\n\n\n\\begin{definition}\nLet $A_n=[n]\\times [4]$. For $i \\in [n]$, we define $t_i(x,y,z)$ to be the\nfollowing operation on $A_n$:\n\\[\n t_i((a_1,b_1), (a_2,b_2), (a_3,b_3)) =\n \\begin{cases}\n (i,b_1 - b_2 + b_3)\n \\quad\\text{if $a_1 = a_2 = a_3 = i$, and} \\\\\n (m(a_1,a_2,a_3),b_1\\oplus b_2 \\oplus b_3),\n \\quad\\text{otherwise.}\n \\end{cases}\n\\]\nThe algebra $\\m a_n$ is defined to be the algebra with universe $A_n$ and\nbasic operations $t_1,\\dots,t_n$.\n\\end{definition}\n\nBy construction, the following is true.\n\n\\begin{claim}\\label{local}\n For every $(n-1)$-element subset $E$ of $A_n$, there is a~term operation of\n $\\m a_n$ that satisfies the minority term equations when restricted to\n elements from $E$.\n\\end{claim}\n\n\\begin{proof}\n Pick $i\\in [n]$ such that no element of $E$ has its first coordinate equal to\n $i$; the operation $t_i$ is a local minority for this $E$.\n\\end{proof}\n\n\n\\begin{proposition}\\label{prop:An}\n For $n > 1$ and odd, the algebra $\\m a_n$ does not have a~minority term.\n\\end{proposition}\n\n\\begin{proof}\n Given some $(i,a)\\in A_n$, we will refer to $a$ as the \\emph{arithmetic part}\n of $(i,a)$. 
This is to avoid talking about `second coordinates' in\n the confusing situation when $(i,a)$ itself is a~part of a~tuple of elements\n of $A_n$.\n \n \n\n To prove the proposition, we will define a~certain subuniverse $R$ of $(\\m\n a_n)^{3n}$ and then show that $R$ is not closed under any minority operation\n on $A_n$ (applied coordinate-wise). We will write $3n$-tuples of elements of\n $A_n$ as $3n\\times 2$ matrices where the arithmetic parts of the elements\n make up the second column.\n\n Let $R \\subseteq (A_n)^{3n}$ be the set of all $3n$-tuples of the form\n \\[\n \\begin{pmatrix}\n 1&x_1\\\\ 2&x_2\\\\ \\vdots\\\\ n&x_n\\\\\n 1&x_{n+1}\\\\ 2&x_{n+2}\\\\ \\vdots\\\\ n&x_{2n}\\\\\n 1&x_{2n+1}\\\\ 2&x_{2n+2}\\\\ \\vdots\\\\ n&x_{3n}\\\\\n \\end{pmatrix}\n \\]\n such that\n \\begin{align}\n &x_{kn+1} \\equiv_2x_{kn+2} \\equiv_2 \\dots \\equiv_2 x_{kn+n},\n &\\text{for $k=0,1,2$, and} \\label{eqn:bits} \\\\\n &\\sum_{i=1}^{3n} x_i = 2.\\label{eqn:strange-sum}\n \\end{align}\n The three equations from (\\ref{eqn:bits}) mean that the least significant bits of the\n arithmetic parts of the first $n$ entries agree and similarly for the second\n and the last $n$ entries; equation (\\ref{eqn:strange-sum}) can be viewed\n as a~combined parity check on all involved bits.\n\n \\begin{claim}\n The relation $R$ is a~subuniverse of $(\\m a_n)^{3n}$.\n \\end{claim}\n \\begin{proof}\n By the symmetry of the $t_i$'s and $R$, it is enough to show that $t_1$\n preserves $R$. 
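Both Observation~\ref{obs:maltsev-diff} and this closure property lend themselves to a quick machine check. The sketch below (plain Python, with the small odd choice $n=3$ and elements of $A_n$ written as pairs) verifies the observation exhaustively over all $4^3$ triples and tests the closure of $R$ under every $t_i$ on randomly sampled triples; it is an illustrative sanity check under these assumptions, not a substitute for the proof:

```python
import random
from itertools import product

# Observation (maltsev-diff): the two Maltsev operations differ by 0 or 2
# (mod 4), and the difference depends only on the parities of x, y, z.
diff_by_parity = {}
for x, y, z in product(range(4), repeat=3):
    d = ((x ^ y ^ z) - (x - y + z)) % 4
    assert d in (0, 2)
    assert diff_by_parity.setdefault((x % 2, y % 2, z % 2), d) == d

n = 3                                   # a small odd n for the sanity check

def m(x, y, z):                         # the fixed minority operation on [n]
    if y == z:
        return x
    if x == z:
        return y
    return z

def t(i, e1, e2, e3):                   # the basic operation t_i of A_n
    (a1, b1), (a2, b2), (a3, b3) = e1, e2, e3
    if a1 == a2 == a3 == i:
        return (i, (b1 - b2 + b3) % 4)
    return (m(a1, a2, a3), b1 ^ b2 ^ b3)

def in_R(tup):
    """Conditions (bits) and (strange-sum); the first-coordinate pattern
    1..n,1..n,1..n is guaranteed by construction below."""
    xs = [b for _, b in tup]
    blocks = [xs[k * n:(k + 1) * n] for k in range(3)]
    bits_ok = all(len({x % 2 for x in blk}) == 1 for blk in blocks)
    return bits_ok and sum(xs) % 4 == 2

def random_R_element():                 # rejection sampling from R
    while True:
        tup = [((j % n) + 1, random.randrange(4)) for j in range(3 * n)]
        if in_R(tup):
            return tup

random.seed(1)
for _ in range(100):                    # coordinate-wise t_i keeps us inside R
    u, v, w = (random_R_element() for _ in range(3))
    for i in range(1, n + 1):
        assert in_R([t(i, u[j], v[j], w[j]) for j in range(3 * n)])
```

For $n=3$ the rejection sampling accepts roughly one tuple in $2^8$, so the whole check runs in well under a second.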
Let us take three arbitrary members of $R$:\n \\[\n \\begin{pmatrix}\n 1&x_{1,1}\\\\ 2&x_{1,2}\\\\ \\vdots\\\\ n&x_{1,n}\\\\\n 1&x_{1,n+1}\\\\ 2&x_{1,n+2}\\\\ \\vdots\\\\ n&x_{1,2n}\\\\\n 1&x_{1,2n+1}\\\\ 2&x_{1,2n+2}\\\\ \\vdots\\\\ n&x_{1,3n}\\\\\n \\end{pmatrix},\n \\begin{pmatrix}\n 1&x_{2,1}\\\\ 2&x_{2,2}\\\\ \\vdots\\\\ n&x_{2,n}\\\\\n 1&x_{2,n+1}\\\\ 2&x_{2,n+2}\\\\ \\vdots\\\\ n&x_{2,2n}\\\\\n 1&x_{2,2n+1}\\\\ 2&x_{2,2n+2}\\\\ \\vdots\\\\ n&x_{2,3n}\\\\\n \\end{pmatrix},\n \\begin{pmatrix}\n 1&x_{3,1}\\\\ 2&x_{3,2}\\\\ \\vdots\\\\ n&x_{3,n}\\\\\n 1&x_{3,n+1}\\\\ 2&x_{3,n+2}\\\\ \\vdots\\\\ n&x_{3,2n}\\\\\n 1&x_{3,2n+1}\\\\ 2&x_{3,2n+2}\\\\ \\vdots\\\\ n&x_{3,3n}\\\\\n \\end{pmatrix}\n \\]\n and apply $t_1$ to them to get:\n \\begin{equation}\n \\vec r =\n \\begin{pmatrix}\n 1&x_{1,1}-x_{2,1}+x_{3,1}\\\\\n 2&x_{1,2}\\oplus x_{2,2}\\oplus x_{3,2}\\\\\n &\\vdots\\\\\n n&x_{1,n}\\oplus x_{2,n}\\oplus x_{3,n}\\\\\n 1&x_{1,n+1}-x_{2,n+1}+x_{3,n+1}\\\\\n 2&x_{1,n+2}\\oplus x_{2,n+2}\\oplus x_{3,n+2}\\\\\n & \\vdots\\\\\n n&x_{1,2n}\\oplus x_{2,2n}\\oplus x_{3,2n}\\\\\n 1&x_{1,2n+1}-x_{2,2n+1}+x_{3,2n+1} \\\\\n 2&x_{1,2n+2}\\oplus x_{2,2n+2}\\oplus x_{3,2n+2}\\\\\n & \\vdots\\\\\n n&x_{1,3n}\\oplus x_{2,3n}\\oplus x_{3,3n}\\\\\n \\end{pmatrix}\n \\label{eqn:r}\n \\end{equation}\n We want to verify that $\\vec r\\in R$. First note that (\\ref{eqn:bits}) is\n satisfied: This follows from the fact that $x-y+z$ and $x\\oplus y\\oplus z$\n give the same result modulo 2, and the assumption that the original three\n tuples satisfied (\\ref{eqn:bits}).\n\n What remains is to verify the property~(\\ref{eqn:strange-sum}). 
If in the\n equality~(\\ref{eqn:r}) above we replace the operations $\\oplus$ by $-$ and\n $+$, verifying~(\\ref{eqn:strange-sum}) is easy: The sum of the\n arithmetic parts of such a modified tuple is\n \\begin{equation}\n \\sum_{j=1}^{3n} (x_{1,j}-x_{2,j}+x_{3,j})=\n \\sum_{j=1}^{3n}\n x_{1,j}-\\sum_{j=1}^{3n}x_{2,j}+\\sum_{j=1}^{3n}x_{3,j}=2-2+2=2.\n \\label{eqn:2}\n \\end{equation}\n This is why we need to examine the difference between the $\\oplus$-based\n and $+$-based Maltsev operations. For $k\\in \\{0,1,2\\}$ and $i\\in\n \\{1,\\dots,n\\}$ we let\n \\[\n c_{k,i} = (x_{1,kn+i} \\oplus x_{2,kn+i} \\oplus x_{3,kn+i}) -\n (x_{1,kn+i} - x_{2,kn+i} + x_{3,kn+i})\n \\]\n By the second part of Observation~\\ref{obs:maltsev-diff}, $c_{k,i}$ does not\n depend on $i$ (changing $i$ does not change\n the $x_{j,kn+i}$'s modulo $\\equiv_2$ by condition~(\\ref{eqn:bits}) in the\n definition of $R$). Hence we can write just $c_k$ instead of $c_{k,i}$.\n\n Using $c_0$, $c_1$, and\n $c_2$ to adjust for the differences between the two Maltsev operations, we can express the\n sum of the arithmetic parts of the tuple $\\vec{r}$ as\n \\[\n \\sum_{j=1}^{3n} (x_{1,j}-x_{2,j}+x_{3,j})+\\sum_{i=2}^{n}\n c_0+\\sum_{i=2}^{n} c_1+\\sum_{i=2}^{n} c_2\n = 2+(n-1)(c_0+c_1+c_2)\n \\]\n where we used~(\\ref{eqn:2}) to get the right hand side. We chose $n$ odd,\n hence $n-1$ is even and each $c_k$ is even by\n Observation~\\ref{obs:maltsev-diff}, so $(n-1)c_k=0$ for any $k=0,1,2$. 
We see that the sum of the\n arithmetic parts of $\\vec{r}$ is equal to 2 which concludes the proof\n of~(\\ref{eqn:strange-sum}) for the tuple~$\\vec r$ and we are done.\n \\end{proof}\n\n It is easy to see that\n \\[\n \\begin{pmatrix}\n 1&0\\\\ 2&0\\\\ \\vdots\\\\ n&0\\\\\n 1&1\\\\ 2&1\\\\ \\vdots\\\\ n&1\\\\\n 1&1\\\\ 2&1\\\\ \\vdots\\\\ n&1\\\\\n \\end{pmatrix},\n \\begin{pmatrix}\n 1&1\\\\ 2&1\\\\ \\vdots\\\\ n&1\\\\\n 1&0\\\\ 2&0\\\\ \\vdots\\\\ n&0\\\\\n 1&1\\\\ 2&1\\\\ \\vdots\\\\ n&1\\\\\n \\end{pmatrix},\n \\begin{pmatrix}\n 1&1\\\\ 2&1\\\\ \\vdots\\\\ n&1\\\\\n 1&1\\\\ 2&1\\\\ \\vdots\\\\ n&1\\\\\n 1&0\\\\ 2&0\\\\ \\vdots\\\\ n&0\\\\\n \\end{pmatrix}\\in R,\n \\quad\\text{and}\\quad\n \\begin{pmatrix}\n 1&0\\\\ 2&0\\\\ \\vdots\\\\ n&0\\\\\n 1&0\\\\ 2&0\\\\ \\vdots\\\\ n&0\\\\\n 1&0\\\\ 2&0\\\\ \\vdots\\\\ n&0\\\\\n \\end{pmatrix}\\notin R.\n \\]\n However, the last tuple can be obtained from the first three by applying any\n minority operation on the set $A_n$ coordinate-wise. From this we conclude\n that $\\m a_n$ does not have a~minority term.\n\\end{proof}\n\nWe note that the above construction of $\\m a_n$ makes sense for $n$ even as well\nand claim that these algebras also have the same key features, namely, by\nconstruction, they have plenty of `local' minority term operations but they do\nnot have minority terms. The verification of this last fact for $n$ even is\nsimilar, but slightly more technical than for $n$ odd, and we omit the proof here.\n\nThe algebras $\\m a_n$ can also be used to witness that having a~lot of local\nminority-majority terms does not guarantee the presence of an actual\nminority-majority term. By padding with dummy variables, any local minority\nterm of an algebra $\\m a_n$ is also a~term that locally satisfies the\nminority-majority term equations. 
But since each $\\m a_n$ has a~Maltsev term\nbut not a~minority term, then by Theorem~\\ref{thm:join} it follows that $\\m\na_n$ cannot have a~minority-majority term.\n\n\\section{Deciding minority in idempotent algebras is in \\compNP}\\label{sec:np}\n\nThe results from the previous section imply that one cannot base an efficient\ntest for the presence of a~minority term in a~finite idempotent algebra on\nchecking if it has enough local minority terms. This does not rule out that\nthe problem is in the class \\compP, but to date no other approach to showing\nthis has worked. As an intermediate result, we show, at least, that this\ndecision problem is in \\compNP{} and so cannot be \\compEXPTIME-complete (unless\n$\\compNP=\\compEXPTIME$).\n\n\n\nWe first show that an instance $\\m a$ of the decision problem \\minority\\ can\nbe expressed as a~particular instance of the subpower membership problem for\n${\\m a}$.\n\n\\begin{definition}\\label{defSMP}\nGiven a~finite algebra $\\m a$, the \\emph{subpower membership problem} for $\\m\na$, denoted by $\\smp(\\m a)$, is the following decision problem:\n\\begin{itemize}\n \\item INPUT: $\\vec a_1, \\dots, \\vec a_k, \\vec b \\in A^n$\n \\item QUESTION: Is $\\vec b$ in the subalgebra of $\\m a^n$ generated by\n $\\{\\vec a_1, \\dots, \\vec a_k\\}$?\n\\end{itemize}\n\\end{definition}\n\nTo build an instance of $\\smp(\\m a)$ expressing that $\\m a$ has a~minority\nterm, let $I =\\{(a,b,c)\\mid \\mbox{$a, b, c \\in A$ and $|\\{a,b,c\\}| \\le 2$}\\}$.\nSo $|I| = 3|A|^2 - 2|A|$. For $(a,b,c) \\in I$, let $\\min(a,b,c)$ be the minority\nelement of this triple. So\n\\[\n \\min(a,b,b) = \\min(b,a,b) = \\min(b,b,a) = \\min(a,a,a) = a.\n\\]\nFor $1 \\le i \\le 3$, let $\\vec \\pi_i \\in A^I$ be defined by $\\vec \\pi_i(a_1,\na_2, a_3) = a_i$ and define $\\vec \\mu_A \\in A^I$ by $\\vec \\mu_A(a_1, a_2, a_3)\n= \\min(a_1, a_2, a_3)$, for all $(a_1, a_2, a_3) \\in I$. 
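The construction of this instance is purely mechanical. A small Python sketch for a hypothetical three-element universe (the names `A`, `pi`, `mu_A` are illustrative, not from the text):

```python
from itertools import product

A = [0, 1, 2]                 # a toy universe; any finite set works the same way

# index set I: all triples over A with at most two distinct entries
I = [tr for tr in product(A, repeat=3) if len(set(tr)) <= 2]
assert len(I) == 3 * len(A) ** 2 - 2 * len(A)

def minority(a, b, c):
    """On I the minority element is well defined: min(a,b,b) = a, etc."""
    if b == c:
        return a
    if a == c:
        return b
    return c                  # here necessarily a == b

# generators pi_1, pi_2, pi_3 and the target vector mu_A, indexed by I
pi = [tuple(tr[k] for tr in I) for k in range(3)]
mu_A = tuple(minority(*tr) for tr in I)
```

An algebra on `A` then has a minority term exactly when some term applied coordinate-wise to `pi[0]`, `pi[1]`, `pi[2]` produces `mu_A`, which is the content of the proposition above.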
Denote the instance\n$\\vec \\pi_1$, $\\vec \\pi_2$, $\\vec \\pi_3$, and $\\vec \\mu_A$ of $\\smp(\\m a)$ by\n$\\min(\\m a)$.\n\n\\begin{proposition}\\label{min-instance}\n An algebra $\\m a$ has a~minority term if and only if $\\vec \\mu_A$ is\n a~member of the subpower of $\\m a^I$ generated by $\\{\\vec \\pi_1, \\vec \\pi_2,\n \\vec \\pi_3\\}$, i.e., if and only if $\\min(\\m a)$ is a~`yes' instance of\n $\\smp(\\m a)$ when $\\m a$ is finite.\n\\end{proposition}\n\n\\begin{proof}\n If $m(x,y,z)$ is a~minority term for $\\m a$, then applying $m$\n coordinatewise to the generators $\\vec \\pi_1$, $\\vec \\pi_2$, $\\vec \\pi_3$\n will produce the element $\\vec \\mu_A$. Conversely, any term that produces\n $\\vec \\mu_A$ from these generators will be a~minority term for $\\m a$.\n\\end{proof}\n\nExamining the definition of $\\min(\\m a)$, we see that the parameters from\nDefinition~\\ref{defSMP} are $k=3$ and $n=3|A|^2-2|A|$, which is (for algebras\nwith at least one at least unary basic operation) polynomial in $\\|\\m A\\|$.\nFor $\\m a$ idempotent, we can in fact improve $n$ to $3|A|^2-3|A|$,\nsince then we do not need to include in $I$ entries of the form $(a,a,a)$.\n\nIn general, it is known that for some finite algebras the subpower membership problem can be\n\\compEXPTIME-complete~\\cite{Kozik2008} and that for some others, e.g., for any\nalgebra that has only trivial or constant basic operations, it lies in the\nclass \\compP. In~\\cite{Mayr2012}, P.\\ Mayr shows that when $\\m a$ has a~Maltsev\nterm, then $\\smp(\\m a)$ is in \\compNP. 
We claim that a careful reading of Mayr's proof reveals that in fact the following uniform version of the subpower membership problem, where the algebra $\\m a$ is considered as part of the input, is also in \\compNP.\n\n\n\\begin{definition}\nDefine \\smpun\\ to be the following decision problem:\n\\begin{itemize}\n \\item INPUT: A~list of tables of basic operations of an algebra~$\\m A$ that includes a~Maltsev operation, and $\\vec a_1, \\dots, \\vec a_k, \\vec b \\in A^n$.\n \\item QUESTION: Is $\\vec b$ in the subalgebra of $\\m a^n$ generated by\n $\\{\\vec a_1, \\dots, \\vec a_k\\}$?\n\\end{itemize}\n\\end{definition}\nWe base the main result of this section\non the following.\n\\begin{theorem}[see \\cite{Mayr2012}]\\label{smpun}\n The decision problem \\smpun\\ is in the class \\compNP.\n\\end{theorem}\n\nWhile this theorem is not explicitly stated in \\cite{Mayr2012}, it can be seen\nthat the runtime of the verifier that Mayr constructs for the problem $\\smp(\\m\na)$, when $\\m a$ has a Maltsev term, has polynomial dependence on the size of\n$\\m a$ in addition to the size of the input to $\\smp(\\m a)$. We stress that\nMayr's verifier requires that the table for a Maltsev term of $\\m a$ is given as part of\nthe description of $\\m a$.\n\n\n\n\n\n\n\n\n\\begin{theorem}\\label{NP} The decision problem \\minority\\ is in the class \\compNP.\n\\end{theorem}\n\n\\begin{proof}\nTo prove this theorem, we provide a polynomial reduction $f$ of \\minority\\ to \\smpun. By Theorem~\\ref{smpun}, this will suffice.\nLet $\\m a$ be an instance of \\minority, i.e., a~finite\n idempotent algebra that has at least one operation.\n \n \n\n We first check, using the polynomial-time\n algorithm from Corollary~\\ref{maltsevterm}, to see if $\\m a$ has a~Maltsev\n term. If it does not, then $\\m a$ will not have a~minority term, and in this case we\n set $f(\\m a)$ to be some fixed `no' instance of \\smpun. 
Otherwise, we augment the list of basic operations of $\\m a$ by\n adding the Maltsev operation on $A$ that the algorithm produced. Denote\n the resulting (idempotent) algebra by $\\m a'$ and note that $\\m a'$ can be constructed from $\\m a$ by a polynomial-time algorithm.\n Also, note that $\\m a'$ is term equivalent to\n $\\m a$ and so the subpower membership problem is the same for both\n algebras.\n\n If we set $f(\\m a)$ to be the instance of \\smpun\\ that consists of the~list of tables of basic operations of~$\\m A'$ along with\n $\\min(\\m a)$ then we have, by Proposition~\\ref{min-instance}, that $f(\\m a)$ is a `yes' instance of \\smpun\\ if and only if $\\m a$ has a minority term. Since the construction of $f(\\m a)$ can be carried out by a procedure whose runtime can be bounded by a polynomial in $\\|\\m a\\|$, we have produced a polynomial reduction of \\minority\\ to \\smpun, as required.\n \\end{proof}\n\n\n\n\n\n\n\\section{Conclusion}\n\nWhile Theorem~\\ref{NP} establishes that testing for a~minority term for finite\nidempotent algebras is not as hard as it could be, the true complexity of this\ndecision problem is still open. Our proof of this theorem closely ties the\ncomplexity of {\\minority} to the complexity of the subpower membership problem\nfor finite Maltsev algebras and specifically to the problem \\smpun. 
Thus any progress on determining the complexity of\n$\\smp(\\m a)$ for finite Maltsev algebras may have a~bearing on the complexity\nof {\\minority}.\nThere has certainly been progress on the algorithmic side of $\\smp$;\na~major recent paper has shown in particular that $\\smp(\\m a)$ is tractable for\n$\\m a$ with cube term operations (of which a Maltsev term operation is\na~special case) as long as $\\m a$ generates a residually small\nvariety~\\cite{BMS18} (the statement from the paper is actually stronger than\nthis, allowing multiple algebras in place of $\\m a$).\n\nIn Section~\\ref{join} we introduced the notion of a~minority-majority term and\nshowed that if testing for such a~term for finite idempotent algebras could be\ndone by a~polynomial-time algorithm, then \\minority\\ would lie in the\ncomplexity class \\compP. This is why we conclude our paper with a~question\nabout deciding minority-majority terms.\n\n\\begin{open-problem*}\n What is the complexity of deciding if a~finite idempotent algebra has\n a~minority-majority term?\n\\end{open-problem*}\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}}\n{"text":"\\section{Introduction}\nThe discovery of graphene, a single layer of carbon atoms, raised high hopes for its application in the future of\nelectronic devices~\\cite{Neto-RMP}. One reason for this attention is the high mobility of the charge carriers (electrons or holes),\nwhose density is well controlled by electrical and chemical doping. \nHowever, the application of pure graphene in the electronic industry did not develop because of its gapless dispersion.\nAdding atoms to and doping graphene, e.g., fluorinated mono-vacancies~\\cite{Kaloni-EPL-100}\nand Ge-intercalated graphene~\\cite{Kaloni-EPL-99}, have been studied to investigate the electrical \ncharacteristics of graphene under impurity doping, which gives rise to new and interesting electronic and magnetic properties. 
\n\n The existence of a neutral triplet collective\nmode in a honeycomb lattice was a novel prediction~\\cite{Baskaran-Jafari}.\nThis collective mode was predicted within the random phase approximation (RPA) for the Hubbard model on\nthe honeycomb lattice, e.g.\\ undoped graphene and graphite. By taking this collective mode into account,\ngood agreement between experimental and theoretical results was achieved for the TRPES experiment on graphite within the momentum average (MA)\nmethod~\\cite{Ebrahimkhas-PRB}. In this approximation, the neutral triplet spin-1 mode was treated as a coupling\nbetween two spinon bound states in undoped graphene, but in scalar form rather than as a $2\\times2$\nmatrix~\\cite{Jafari-Baskaran, Baskaran-Jafari}.\nIf we recheck the calculations of refs.~\\cite{Jafari-Baskaran, Ebrahimkhas-IJMPB},\nwe find, first, that some parts of the calculations involving the two sub-lattices\nare eliminated and, second, that the effects of the overlap factors are neglected~\\cite{Peres-comment, Jafari-recomment}.\nThe innovation of ref.~\\cite{Jafari-Baskaran} was to consider a scalar form for the spin susceptibility, instead of the $2\\times 2$ tensor\nappropriate for a two-sub-lattice material.\nThe essential purpose of this research is to analyze the validity of the spin-1 collective mode in the Hubbard model without the one-cone approximation;\nits novelty is the calculation, within the RPA, of the effect of an effective long range\ninteraction $(U^{eff}_{l-r})$ on the neutral triplet collective mode~\\cite{JPHCM.Jafari}.\nWhy do we consider this form of long range interaction?\nBecause only this interaction type\nleads to Eq.~\\ref{main.eq}, which is the essential equation for finding the neutral collective mode in the RPA formalism\n(without the one-cone approximation); in addition, we should only keep those types of interaction\nacting between electrons with opposite spins.
\nThe Coulomb interaction is known to have the form $e^{2}\/(\\epsilon r)$, which leads to the prediction\nof a plasmon dispersion in graphene and has been discussed in detail in ref.~\\cite{Wunsch}.\nThe Coulomb interaction on the honeycomb lattice does not result in a spin-1 collective mode.\n\n In the first section of this paper, the effect of the short range Hubbard interaction on the honeycomb\nlattice is calculated exactly within the RPA, in two representations, without neglecting any interaction terms~\\cite{Peres-comment}.\nIn the second section, we first introduce a new effective long range\ninteraction as a new model for graphene and then use this model to study the neutral triplet collective\nmode. Finally, the findings of these calculations are summarized and the validity of the neutral collective mode\nin undoped graphene is discussed.\n\n\n\\section{ On-site interaction: sub-lattice representation}\n\n The Hubbard Hamiltonian consists of two terms: tight binding (TB) and short range Hubbard interaction (HI),\n \n \\begin{eqnarray}\n H&=&H_{TB}+H_{HI} \n =-t\\sum_{\\langle i,j\\rangle,\\sigma}(a_{i}^{\\dagger \\sigma}b_{j}^{\\sigma}+b_{j}^{\\dagger \\sigma}a_{i}^{\\sigma})\\nonumber\\\\\n &+& U\\sum_{i}(a_{i}^{\\dagger \\uparrow}a_{i}^{\\uparrow}a_{i}^{\\dagger \\downarrow}a_{i}^{\\downarrow}+\n b_{i}^{\\dagger \\uparrow}b_{i}^{\\uparrow}b_{i}^{\\dagger \\downarrow}b_{i}^{\\downarrow}),\\\\\n \\label{TB.eq}\\nonumber\n \\end{eqnarray}\n \n where $t$ is the nearest neighbor hopping, $a^{\\sigma}_{i},b^{\\sigma}_{i}$ ($a^{\\dagger \\sigma}_{i},b^{\\dagger \\sigma}_{i}$)\n are annihilation (creation) operators for electrons on sub-lattices $A$ and $B$, respectively, $\\sigma$ stands for the electron spin,\n and $U$ is the on-site Hubbard interaction potential.
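As a quick sanity check on the non-interacting part $H_{TB}$, one can diagonalize the Bloch Hamiltonian of the honeycomb lattice numerically and verify that the two bands touch at a Dirac point. The sketch below uses our own geometry conventions ($t=1$, nearest-neighbour distance set to $1$); these choices are assumptions, not taken from the text.

```python
import numpy as np

t = 1.0
# nearest-neighbour vectors of the honeycomb lattice (our convention)
deltas = [np.array([0.0, 1.0]),
          np.array([np.sqrt(3)/2, -0.5]),
          np.array([-np.sqrt(3)/2, -0.5])]

def bands(k):
    """Eigenvalues of the 2x2 Bloch Hamiltonian of H_TB at momentum k."""
    f = sum(np.exp(1j*np.dot(k, d)) for d in deltas)
    H = np.array([[0.0, -t*f], [-t*np.conj(f), 0.0]])
    return np.linalg.eigvalsh(H)          # sorted ascending

K = np.array([4*np.pi/(3*np.sqrt(3)), 0.0])   # a Dirac point in this convention
print(bands(K))                               # both bands touch zero at K
```

At the zone centre the same routine returns the full bandwidth $\pm 3t$, while at $K$ the gapless Dirac crossing appears.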
The Hubbard interaction term can be written in the exchange channel,\n \n \\begin{equation}\n H_{HI}= -U\\sum_{i}(a_{i}^{\\dagger \\uparrow}a_{i}^{\\downarrow}a_{i}^{\\dagger \\downarrow}a_{i}^{\\uparrow}+\n b_{i}^{\\dagger \\uparrow}b_{i}^{\\downarrow}b_{i}^{\\dagger \\downarrow}b_{i}^{\\uparrow}).\n \\label{exHI.eq}\n \\end{equation}\n \n \n\n The interaction term is represented by Feynman diagrams with two kinds of vertices.\n Each vertex corresponds to the on-site repulsive interaction on the $A$ or $B$ sub-lattice, Fig.~\\ref{vertices.fig}.\n At each vertex, electrons with opposite spins interact via the repulsive potential~\\cite{BF}.\n Fig.~\\ref{vertices.fig}(a) shows the creation (annihilation) of electrons with up and down spins at each vertex,\n on either the $A$ or the $B$ sub-lattice. In Fig.~\\ref{vertices.fig}(b), the RPA chain is formed by adding the vertices of\n each sub-lattice. In this chain, vertices belonging to the same or to different sub-lattices are put together,\n so we define the polarization operators $\\chi_{AA}, \\chi_{AB}$, which include the effects of the\n overlap factors.\n \n \n \\begin{figure}[h]\n\\begin{center}\n\\vspace{-3mm}\n\\includegraphics[width=8cm,height=3cm,angle=0]{vertices1.eps}\n\\vspace{-3mm}\n\\caption{(a) Each vertex includes the creation (annihilation) of an electron with up (down) spin. These vertices describe the\ninteraction between two electrons with opposite spins on the same atom of a sub-lattice via the short range repulsive potential $U$. This type of\ninteraction is given in Eq.~\\ref{exHI.eq}: on each sub-lattice $A(B)$, electrons with up (down) spin interact with electrons with down (up) spin.\n(b) The RPA chain is obtained by summing the $A$ and $B$ vertices; the diagram shows one of the RPA chains.}\n \\label{vertices.fig} \n\\end{center}\n\\vspace{-3mm}\n \\end{figure}\n \n Each vertex in the chain can be either of $A$ or $B$ type.
By summing the RPA series, one arrives at the following\n set of equations for the susceptibilities in the sub-lattice representation~\\cite{Peres-comment},\n \n \\begin{equation}\n\\begin{cases}\n \\chi_{AA}=\\chi^{0}_{AA}+U\\chi^{0}_{AA}\\chi_{AA}+U\\chi^{0}_{AB}\\chi_{BA}\\\\\n \\chi_{AB}=\\chi^{0}_{AB}+U\\chi^{0}_{AA}\\chi_{AB}+U\\chi^{0}_{AB}\\chi_{BB}\\\\\n \\chi_{BA}=\\chi^{0}_{BA}+U\\chi^{0}_{BA}\\chi_{AA}+U\\chi^{0}_{BB}\\chi_{BA}\\\\\n \\chi_{BB}=\\chi^{0}_{BB}+U\\chi^{0}_{BA}\\chi_{AB}+U\\chi^{0}_{BB}\\chi_{BB}\\\\\n\\end{cases}.\n\\label{coupling.eq}\n\\end{equation}\n\n The coupled equations in Eq.~\\ref{coupling.eq} can be decoupled into two subsystems with\n a common determinant, which must vanish at the collective mode dispersion~\\cite{Peres-comment}:\n \n \\begin{equation}\n \\begin{vmatrix} \n 1-U\\chi^{0}_{AA} & -U\\chi^{0}_{AB}\\\\ \n -U\\chi^{0}_{BA} & 1-U\\chi^{0}_{BB}\n \\end{vmatrix}=0,\n \\label{determin1.eq}\n\\end{equation}\n \n or \n \n \\begin{equation}\n 1-U(\\chi^{0}_{AA}+\\chi^{0}_{BB})+U^{2}\\chi^{0}_{AA}\\chi^{0}_{BB}-U^2 \\chi^{0}_{AB}\\chi^{0}_{BA}=0.\n \\label{determin2.eq}\n \\end{equation}\n \n \n The bare polarization operators, i.e.\\ the susceptibilities of the vertices, can be calculated using the non-interacting\n Green's functions of graphene's electrons in the sub-lattice representation. Considering the Green's functions\n in the momentum representation close to the Dirac points, they can be classified\n by the electron valleys $\\vec{K}$ and $-\\vec{K}$, according to the vicinity of the corresponding Dirac point.
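The reduction of the coupled RPA system above to the determinant condition can also be verified symbolically, treating the bare susceptibilities as generic symbols; a minimal sketch with sympy (symbol names are our own):

```python
import sympy as sp

U = sp.symbols('U')
x0 = dict(AA=sp.Symbol('chi0_AA'), AB=sp.Symbol('chi0_AB'),
          BA=sp.Symbol('chi0_BA'), BB=sp.Symbol('chi0_BB'))
x = dict(AA=sp.Symbol('chi_AA'), AB=sp.Symbol('chi_AB'),
         BA=sp.Symbol('chi_BA'), BB=sp.Symbol('chi_BB'))

# the four coupled RPA equations for the dressed susceptibilities
eqs = [sp.Eq(x['AA'], x0['AA'] + U*x0['AA']*x['AA'] + U*x0['AB']*x['BA']),
       sp.Eq(x['AB'], x0['AB'] + U*x0['AA']*x['AB'] + U*x0['AB']*x['BB']),
       sp.Eq(x['BA'], x0['BA'] + U*x0['BA']*x['AA'] + U*x0['BB']*x['BA']),
       sp.Eq(x['BB'], x0['BB'] + U*x0['BA']*x['AB'] + U*x0['BB']*x['BB'])]
sol = sp.solve(eqs, list(x.values()), dict=True)[0]

# the 2x2 determinant, expanded
det = (1 - U*(x0['AA'] + x0['BB'])
       + U**2*x0['AA']*x0['BB'] - U**2*x0['AB']*x0['BA'])

# every dressed susceptibility has this determinant as its only denominator
for xi in x.values():
    assert sp.denom(sp.cancel(sol[xi]*det)) == 1
```

The common denominator of all dressed susceptibilities is exactly the determinant, so the collective mode condition is its vanishing.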
In the valleys\n $\\vec{K}$ and $-\\vec{K}$, the non-interacting Hamiltonians in the representation of sub-lattices $A, B$ are\n\n \\begin{equation}\n H_{\\pm \\vec{K}}=\\begin{pmatrix} \n 0 & \\pm p_{x}-ip_{y}\\\\ \n \\pm p_{x}+ip_{y} & 0\n \\end{pmatrix},\n \\label{nonintH.eq}\n\\end{equation}\n\n and, setting $v_{F}=1$, the corresponding Matsubara Green's functions for undoped graphene ($\\mu=0$) are\n\n\\begin{equation}\n G^{\\pm \\vec{K}}(\\vec{p},i\\epsilon)=\\frac{1}{(i\\epsilon)^2-p^2}\\begin{pmatrix} \n i\\epsilon & \\pm pe^{\\mp i\\phi}\\\\ \n \\pm pe^{\\pm i\\phi} & i\\epsilon\n \\end{pmatrix},\n \\label{MG.eq}\n\\end{equation}\n\n where $\\phi$ is the azimuthal angle of the momentum vector $\\vec{p}$.\nUsing Eq.~\\ref{MG.eq}, the polarization operators can be calculated as\n\n\\begin{equation}\n\\chi^{0}_{\\alpha,\\beta}(\\vec{q},i\\omega)=-T\\sum_{v=\\pm \\vec{K}}\\sum_{\\epsilon} \\int \\frac{d\\vec{p}}{(2\\pi)^2}\nG^{v}_{\\alpha, \\beta}(\\vec{p},i\\epsilon)G^{-v}_{\\beta, \\alpha}(\\vec{p'},i\\epsilon+i\\omega),\n\\label{chi-RPA.eq}\n\\end{equation}\n\n where $\\vec{p'}=\\vec{p}+\\vec{q}$. Performing the summation over the fermionic Matsubara frequencies\n $\\epsilon=\\pi T(2n+1)$ and the electron valleys $v=\\pm\\vec{K}$, we obtain\n \n \\begin{eqnarray}\n \\chi^{0}_{AA}(\\vec{q}, i\\omega)=\\chi^{0}_{BB}(\\vec{q}, i\\omega)=-\\int{\\frac{d\\vec{p}}{(2\\pi)^2}\\frac{p+p'}{(i\\omega)^2-(p+p')^2}},\\nonumber\\\\\n \\chi^{0}_{AB}(\\vec{q}, i\\omega)=\\chi^{0}_{BA}(\\vec{q}, i\\omega)=\n \\int{\\frac{d\\vec{p}}{(2\\pi)^2}\\frac{\\cos(\\phi-\\phi')(p+p')}{(i\\omega)^2-(p+p')^2}}.\\nonumber\\\\\n \\label{chi-sublattices.eq}\n \\end{eqnarray} \n \n Our results in Eq.~\\ref{determin1.eq} and Eq.~\\ref{chi-sublattices.eq} are in agreement with those of ref.~\\cite{Peres-comment};\n the only difference is that the decomposition of the overlap factor $\\cos(\\phi-\\phi')$ near the Dirac point was performed in\n Eq.~\\ref{chi-sublattices.eq}.
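The absence of zeros of the determinant for reasonable $U$ can be probed numerically by discretizing the bare susceptibility integrals with an ultraviolet cutoff. The sketch below is a rough estimate only: real frequency below the particle-hole continuum ($\omega < v_F q$, $v_F=1$), and the cutoff and grid sizes are arbitrary choices of ours.

```python
import numpy as np

def chi0(q, w, Lam=10.0, Np=400, Nphi=400):
    """Discretized bare susceptibilities chi0_AA and chi0_AB
    on a polar momentum grid with UV cutoff Lam (v_F = 1)."""
    p = np.linspace(1e-4, Lam, Np)
    phi = np.linspace(0.0, 2*np.pi, Nphi, endpoint=False)
    P, PHI = np.meshgrid(p, phi, indexing='ij')
    Pp = np.sqrt(P**2 + q**2 + 2*q*P*np.cos(PHI))    # |p + q|, q along x
    cosd = (P + q*np.cos(PHI)) / Pp                  # cos(phi - phi')
    kern = (P + Pp) / (w**2 - (P + Pp)**2)           # common kernel
    meas = P / (2*np.pi)**2                          # polar measure
    dp, dphi = p[1] - p[0], phi[1] - phi[0]
    chiAA = -np.sum(meas*kern)*dp*dphi
    chiAB = np.sum(meas*cosd*kern)*dp*dphi
    return chiAA, chiAB

chiAA, chiAB = chi0(q=0.5, w=0.2)   # w < q: below the p-h continuum
# determinant with chi_BB = chi_AA and chi_BA = chi_AB
D = lambda U: 1 - 2*U*chiAA + U**2*(chiAA**2 - chiAB**2)
print(chiAA, chiAB, [round(D(U), 4) for U in (0.5, 1.0, 2.0, 4.0)])
```

Below the continuum the kernel never changes sign, so $\chi^{0}_{AA}>0$ and $|\chi^{0}_{AB}|\le\chi^{0}_{AA}$ pointwise, which is what makes a zero of the determinant hard to reach.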
Now, using Eq.~\\ref{determin1.eq} and Eq.~\\ref{chi-sublattices.eq}, we look for zeros of\n Eq.~\\ref{determin2.eq}, but we did not obtain any zeros, i.e.\\ no triplet collective excitation, for any reasonable range of $U$. In other words,\n the short-range interaction does not lead to any neutral collective mode.\n \n\\section{On-site interaction: eigenvector representation}\n\n In this section the same RPA problem in the triplet channel of the on-site interaction will be considered~\\cite{note},\n but in the representation of the eigenvectors of the Hamiltonians of Eq.~\\ref{nonintH.eq}. For the\n electron valleys $\\vec{K}$ and $-\\vec{K}$, the eigenvectors are\n \n \\begin{equation}\n f^{\\vec{K}}_{s} = \\frac{1}{\\sqrt{2}} \\begin{pmatrix} \n e^{-i\\phi\/2}\n \\\\ se^{i\\phi\/2}\n \\end{pmatrix}, f^{-\\vec{K}}_{s} = \\frac{1}{\\sqrt{2}} \\begin{pmatrix} -se^{i\\phi\/2}\n \\\\ e^{i\\phi\/2} \\end{pmatrix} ,\n \\label{eignvector2.eq}\n\\end{equation}\n\n where $f^{\\pm \\vec{K}}_{s}$ corresponds to electron states in the conduction band with energy $v_{F}p$ ($s=+1$) and in the valence band\n with energy $-v_{F}p$ ($s=-1$). The interaction Hamiltonian Eq.~\\ref{exHI.eq} in the momentum representation takes\n the form:
+ \\Psi^{\\uparrow \\dagger}_{\\vec{p'}_{1}}\n \\begin{pmatrix} 0 & 0 \\\\ 0 & 1 \\end{pmatrix} \\Psi^{\\downarrow}_{\\vec{p}_{1}} \\Psi^{\\downarrow \\dagger}_{\\vec{p'}_{2}}\n \\begin{pmatrix} 0 & 0 \\\\ 0 & 1 \\end{pmatrix} \\Psi^{\\uparrow}_{\\vec{p}_{2}} \\right \\},\n \\label{sum_psi1.eq}\n \\end{eqnarray}\n \n where $\\vec{p'}_{1}=\\vec{p}_{1}+\\vec{q}$, $\\vec{p'}_{2}=\\vec{p}_{2}-\\vec{q}$, and \n $\\Psi^{\\uparrow \\downarrow}_{\\vec{p}}=(a^{\\uparrow \\downarrow}_{\\vec{p}}, b^{\\uparrow \\downarrow}_{\\vec{p}})^T$\n is a two-component column vector built from electron annihilation operators. The sum in Eq.~\\ref{sum_psi1.eq} over\n $\\vec{p}_{1}, \\vec{p}_{2}, \\vec{q}$ should be distributed among the electron valleys. Retaining only the terms corresponding\n to small $\\vec{q}$ (since we are looking for small-momentum collective excitations), we obtain two intra-valley and two inter-valley terms\n\n\\begin{eqnarray}\n H_{int}=-\\frac{U}{N}\\sum_{\\vec{p}_{1} \\vec{p}_{2} \\vec{q}} \\sum_{\\vec{v}_{1},\\vec{v}_{2}}\n \\left\\{ \\Psi^{\\uparrow \\dagger}_{\\vec{v'}_{1}}\n \\begin{pmatrix} 1 & 0 \\\\ 0 & 0 \\end{pmatrix} \\Psi^{\\downarrow}_{\\vec{v}_{1}} \\Psi^{\\downarrow \\dagger}_{\\vec{v'}_{2}}\n \\begin{pmatrix} 1 & 0 \\\\ 0 & 0 \\end{pmatrix} \\Psi^{\\uparrow}_{\\vec{v}_{2}} \\right. \\nonumber\\\\\n \\left. + \\Psi^{\\uparrow \\dagger}_{\\vec{v'}_{1}}\n \\begin{pmatrix} 0 & 0 \\\\ 0 & 1 \\end{pmatrix} \\Psi^{\\downarrow}_{\\vec{v}_{1}} \\Psi^{\\downarrow \\dagger}_{\\vec{v'}_{2}}\n \\begin{pmatrix} 0 & 0 \\\\ 0 & 1 \\end{pmatrix} \\Psi^{\\uparrow}_{\\vec{v}_{2}} \\right \\},\n \\label{sum_psi2.eq}\n \\end{eqnarray}\n\n where $\\vec{v}_{\\alpha}=\\vec{V}_{\\alpha}+\\vec{p}_{\\alpha}, \\vec{v'}_{\\alpha}=\\vec{V}_{\\alpha}+\\vec{p'}_{\\alpha}$ and \n $\\vec{V}_{\\alpha}=\\pm \\vec{K}$.
Now the change of basis to the eigenvectors of Eq.~\\ref{eignvector2.eq}\n can be performed according to the relation $\\Psi^{\\uparrow \\downarrow}_{\\vec{V}+\\vec{p}}=\n \\sum_{s=\\pm}f^{\\vec{V}}_{s}c^{\\uparrow \\downarrow}_{\\vec{V}+\\vec{p},s}$, where $c^{\\uparrow \\downarrow}_{\\vec{V}+\\vec{p},s}$ is the\n annihilation operator for an electron from the valley $\\vec{V}=\\pm \\vec{K}$, from the conduction $(s=+1)$ or\n valence $(s=-1)$ band, with momentum $\\vec{p}$ (measured from the corresponding Dirac point) and\n with up\/down spin. The result of this transformation of Eq.~\\ref{sum_psi2.eq} is\n \n \\begin{eqnarray}\n H_{int}=\\sum_{\\vec{p}_{1}\\vec{p}_{2}\\vec{q}}\\sum_{\\vec{V}_{1}\\vec{V}_{2}=\\pm\\vec{K}}\\sum_{s'_1,s'_2,s_1,s_2=\\pm} \\left \\{\n \\Gamma^{(1)}_{\\vec{V}_{1}\\vec{V}_{2}}+\\Gamma^{(2)}_{\\vec{V}_{1}\\vec{V}_{2}} \\right \\} \\nonumber \\\\\n \\times c^{\\uparrow \\dagger}_{\\vec{V}_{1}+\\vec{p'}_{1},s'_1} c^{\\downarrow }_{\\vec{V}_{1}+\\vec{p}_{1},s_1}\n c^{\\downarrow \\dagger}_{\\vec{V}_{2}+\\vec{p'}_{2},s'_2} c^{\\uparrow }_{\\vec{V}_{2}+\\vec{p}_{2},s_2}\n \\label{H_int_new.eq}\n \\end{eqnarray}\n \n where the vertices are\n \n \\begin{eqnarray}\n \\Gamma^{(1)}_{++}&=&\\Gamma^{(2)}_{--}=\\frac{u}{4}e^{\\frac{i}{2}(-\\phi_{1}+\\phi'_{1}-\\phi_{2}+\\phi'_{2})},\\nonumber \\\\\n \\Gamma^{(2)}_{++}&=&\\Gamma^{(1)}_{--}=\\frac{u}{4}s_{1}s'_{1}s_{2}s'_{2}e^{\\frac{i}{2}(\\phi_{1}-\\phi'_{1}+\\phi_{2}-\\phi'_{2})}, \\nonumber\\\\\n \\Gamma^{(1)}_{+-}&=&\\Gamma^{(2)}_{-+}=\\frac{u}{4}s_{2}s'_{2}e^{\\frac{i}{2}(-\\phi_{1}+\\phi'_{1}+\\phi_{2}-\\phi'_{2})},\\nonumber \\\\\n \\Gamma^{(2)}_{+-}&=&\\Gamma^{(1)}_{-+}=\\frac{u}{4}s_{1}s'_{1}e^{\\frac{i}{2}(\\phi_{1}-\\phi'_{1}-\\phi_{2}+\\phi'_{2})}\n \\label{vertices4.eq}\n \\end{eqnarray}\n \n \n where $u=U\/N$. The interaction Hamiltonian Eq.~\\ref{H_int_new.eq} gives rise to an RPA series of the type\n depicted in Fig.~\\ref{vertices2.fig}.
A vertex at each place in these series can be of type $\\Gamma^{(1)}$ or $\\Gamma^{(2)}$,\n while the electron valley in the Green's functions between vertices can be $\\vec{K}$ or $-\\vec{K}$. \n \n \\begin{figure}[h]\n\\begin{center}\n\\vspace{-3mm}\n\\includegraphics[width=6cm,height=2cm,angle=0]{vertices2.eps}\n\\vspace{-3mm}\n\\caption{ Each vertex is of type $\\Gamma^{(1)}$ or $\\Gamma^{(2)}$ and describes the interaction between\nvalence or conduction electrons in each valley. The chain contains all types of $\\Gamma$ functions\ndescribing the interaction of electrons in each valley; the diagram shows one of the RPA chains.}\n \\label{vertices2.fig}\n\\end{center}\n\\vspace{-3mm}\n\\end{figure}\n \n \n The full vertex $\\Gamma^{(ij)}_{\\vec{V}_{1}\\vec{V}_{2}}$ can be renormalized by the RPA summation, where $i,j=1,2$ are the types of the\n left and right vertices in the series. The Bethe-Salpeter equation for this vertex is\n \n \\begin{eqnarray}\n &&\\Gamma^{(ij)}_{\\vec{V}_{1}\\vec{V}_{2}}(\\vec{p'}_{1},s'_{1},\\vec{p}_{1},s_{1},\\vec{p'}_{2},s'_{2}, \\vec{p}_{2},s_{2};i\\omega)= \\nonumber\\\\\n &&\\delta_{ij}\\Gamma^{(i)}_{\\vec{V}_{1}\\vec{V}_{2}}(\\vec{p'}_{1},s'_{1},\\vec{p}_{1},s_{1},\\vec{p'}_{2},s'_{2}, \\vec{p}_{2},s_{2}) \\nonumber \\\\\n &&-T\\sum_{k\\vec{V}}\\sum_{\\epsilon ss'}\\int \\frac{d\\vec{p}}{(2\\pi)^2}\\Gamma^{(i)}_{\\vec{V}_{1}\\vec{V}}(\\vec{p'}_{1},s'_{1},\\vec{p}_{1},s_{1},\\vec{p'},s', \\vec{p},s) \\nonumber\\\\\n &&\\times G^{\\vec{V}}_{s}(\\vec{p},i\\epsilon)G^{\\vec{V}}_{s'}(\\vec{p'},i\\epsilon+i\\omega)\n \\Gamma^{(kj)}_{\\vec{V}\\vec{V}_{2}}(\\vec{p},s,\\vec{p'},s',\\vec{p'}_{2},s'_{2}, \\vec{p}_{2},s_{2};i\\omega),\\nonumber\\\\\n \\label{new-vertex1.eq} \n \\end{eqnarray}\n \n where $\\vec{p'}=\\vec{p}+\\vec{q}$.
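The explicit vertices of Eq.~(vertices4.eq) indeed factorize into left and right parts built from two elementary phase factors (introduced as $\gamma^{a},\gamma^{b}$ below); a symbolic check of three representative cases, with our own symbol names:

```python
import sympy as sp

u = sp.symbols('u', positive=True)
p1, p1p, p2, p2p = sp.symbols('phi_1 phip_1 phi_2 phip_2', real=True)
s1, s1p, s2, s2p = sp.symbols('s_1 sp_1 s_2 sp_2', real=True)

def gamma_a(phi, phip):               # gamma^a = (1/2) exp(i(-phi+phi')/2)
    return sp.exp(sp.I*(-phi + phip)/2)/2

def gamma_b(phi, phip, s, sprime):    # gamma^b = (1/2) s s' exp(i(phi-phi')/2)
    return s*sprime*sp.exp(sp.I*(phi - phip)/2)/2

# three representative entries of the vertex table
G1_pp = u/4*sp.exp(sp.I*(-p1 + p1p - p2 + p2p)/2)
G2_pp = u/4*s1*s1p*s2*s2p*sp.exp(sp.I*(p1 - p1p + p2 - p2p)/2)
G1_pm = u/4*s2*s2p*sp.exp(sp.I*(-p1 + p1p + p2 - p2p)/2)

assert sp.simplify(gamma_a(p1, p1p)*u*gamma_a(p2, p2p) - G1_pp) == 0
assert sp.simplify(gamma_b(p1, p1p, s1, s1p)*u*gamma_b(p2, p2p, s2, s2p) - G2_pp) == 0
assert sp.simplify(gamma_a(p1, p1p)*u*gamma_b(p2, p2p, s2, s2p) - G1_pm) == 0
```

This factorization is what allows the Bethe-Salpeter equation to close on the small matrix $V_{ij}$ below.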
The Green's functions for the eigenstates of the Hamiltonians of Eq.~\\ref{nonintH.eq} at\n $\\mu=0$ take their simplest form,\n \n \\begin{equation}\n G^{\\pm \\vec{K}}_{-}(\\vec{p}, i\\epsilon)=\\frac{1}{i\\epsilon-p},~~~~~ G^{\\pm \\vec{K}}_{+}(\\vec{p}, i\\epsilon)=\\frac{1}{i\\epsilon+p}.\n \\label{greenf-ne.eq}\n \\end{equation}\n \n Before proceeding further, we note that the vertices of Eq.~\\ref{vertices4.eq} can be decoupled into left and right parts,\n \n \\begin{equation}\n \\Gamma^{(i)}_{\\vec{V}_{1}\\vec{V}_{2}}(\\vec{p'}_{1},s'_{1},\\vec{p}_{1},s_{1},\\vec{p'}_{2},s'_{2}, \\vec{p}_{2},s_{2})=\n \\Gamma^{i.\\vec{V}_{1}}_{\\vec{p'}_{1}s'_{1}\\vec{p}_{1}s_{1}} \\cdot u \\cdot \\Gamma^{i.\\vec{V}_{2}}_{\\vec{p'}_{2}s'_{2}\\vec{p}_{2}s_{2}},\n \\label{l-r-gamma.eq}\n \\end{equation}\n \n where\n \n \\begin{equation}\n \\gamma^{a}_{\\vec{p'}s'\\vec{p}s}=\\frac{1}{2}e^{\\frac{i}{2}(-\\phi+\\phi')}, ~~~~~ \\gamma^{b}_{\\vec{p'}s'\\vec{p}s}=\n \\frac{1}{2}ss' e^{\\frac{i}{2}(\\phi-\\phi')},\n \\label{new-gamma.eq}\n \\end{equation}\n \n and the brief notations $1.\\vec{K}=2.(-\\vec{K})=a$ and $2.\\vec{K}=1.(-\\vec{K})=b$ are introduced, so that $\\Gamma^{i.\\vec{V}}$ stands for $\\gamma^{a}$ or $\\gamma^{b}$. Since all bare vertices decouple,\n the summed vertices can also be decoupled into left and right parts. The RPA-renormalized interaction\n is written as $\\Gamma^{ij}_{\\vec{V}_{1}\\vec{V}_{2}}=\\Gamma^{i.\\vec{V}_{1}}V_{ij}\\Gamma^{j.\\vec{V}_{2}}$.
By substituting\n this into Eq.~\\ref{new-vertex1.eq}, the equations for the renormalized interaction are obtained as\n \n \\begin{equation}\n V_{ij}=u+u\\sum_{k\\vec{V}}\\Pi^{\\vec{V}}_{ik}V_{kj},\n \\label{int-term.eq}\n \\end{equation}\n \n where the polarization operators are introduced as\n \n \\begin{eqnarray}\n &&\\Pi^{\\vec{V}}_{ij}(\\vec{q},i\\omega)=\\nonumber\\\\\n &&-T\\sum_{\\epsilon ss'}\\int\\frac{d\\vec{p}}{(2\\pi)^2}\\Gamma^{i.\\vec{V}}_{\\vec{p'}s'\\vec{p}s}G^{\\vec{V}}_{s}(\\vec{p},i\\epsilon)\n \\Gamma^{j.\\vec{V}}_{\\vec{p}s\\vec{p'}s'}G^{\\vec{V}}_{s'}(\\vec{p'},i\\epsilon+i\\omega).\\nonumber\\\\\n \\label{pol.eq}\n \\end{eqnarray}\n \n \n A nontrivial solution of Eq.~\\ref{int-term.eq} occurs when its determinant is zero,\n \n \\begin{eqnarray}\n \\begin{vmatrix} \n 1-u\\Pi^{\\vec{K}}_{11}-u\\Pi^{-\\vec{K}}_{11} & -u\\Pi^{\\vec{K}}_{12}-u\\Pi^{-\\vec{K}}_{12}\\\\ \n -u\\Pi^{\\vec{K}}_{21}-u\\Pi^{-\\vec{K}}_{21} & 1-u\\Pi^{\\vec{K}}_{22}-u\\Pi^{-\\vec{K}}_{22}\n \\end{vmatrix}=\\nonumber\\\\\n 1-u(\\Pi^{\\vec{K}}_{11}+\\Pi^{-\\vec{K}}_{11}+\\Pi^{\\vec{K}}_{22}+\\Pi^{-\\vec{K}}_{22})\\nonumber\\\\\n +u^{2}(\\Pi^{\\vec{K}}_{11}+\\Pi^{-\\vec{K}}_{11})(\\Pi^{\\vec{K}}_{22}+\\Pi^{-\\vec{K}}_{22})\\nonumber\\\\\n -u^{2}(\\Pi^{\\vec{K}}_{12}+\\Pi^{-\\vec{K}}_{12})(\\Pi^{\\vec{K}}_{21}+\\Pi^{-\\vec{K}}_{21})=0,\n \\label{determin3.eq}\n \\end{eqnarray}\n \n Using Eqs.~\\ref{greenf-ne.eq},~\\ref{l-r-gamma.eq},~\\ref{new-gamma.eq} and Eq.~\\ref{pol.eq}, the polarization operators can be\n easily calculated; performing the summations over $\\epsilon=\\pi T (2n+1)$ and over $s,s'=\\pm1$, we find\n \n \\begin{eqnarray}\n \\Pi^{\\vec{K}}_{11}=\\Pi^{-\\vec{K}}_{11}=\\Pi^{\\vec{K}}_{22}=\\Pi^{-\\vec{K}}_{22}=\\frac{1}{2}\\int\\frac{d\\vec{p}}{(2\\pi)^2}\\frac{p+p'}{(i\\omega)^2-(p+p')^2},\\nonumber\\\\\n \\Pi^{\\vec{K}}_{12}= \\Pi^{-\\vec{K}}_{21}=-\\frac{1}{2}\\int\\frac{d\\vec{p}}{(2\\pi)^2}\\frac{e^{i(-\\phi+\\phi')}(p+p')}{(i\\omega)^2-(p+p')^2},\\nonumber\\\\\n 
\\Pi^{\\vec{K}}_{21}= \\Pi^{-\\vec{K}}_{12}=-\\frac{1}{2}\\int\\frac{d\\vec{p}}{(2\\pi)^2}\\frac{e^{i(\\phi-\\phi')}(p+p')}{(i\\omega)^2-(p+p')^2}.\\nonumber\\\\\n \\label{pol2.eq}\n \\end{eqnarray}\n \n Comparing Eq.~\\ref{pol2.eq} with Eq.~\\ref{chi-sublattices.eq}, and Eq.~\\ref{determin1.eq} with Eq.~\\ref{determin3.eq}, we find that\n $\\chi^{0}_{AA}=\\chi^{0}_{BB}=\\Pi^{\\vec{K}}_{11}+\\Pi^{-\\vec{K}}_{11}=\\Pi^{\\vec{K}}_{22}+\\Pi^{-\\vec{K}}_{22}$ and\n $\\chi^{0}_{AB}=\\chi^{0}_{BA}=\\Pi^{\\vec{K}}_{12}+\\Pi^{-\\vec{K}}_{12}=\\Pi^{\\vec{K}}_{21}+\\Pi^{-\\vec{K}}_{21}$. Therefore,\n the sub-lattice representation gives the same result as the representation of eigenstates for the short range interaction.\n Hence in this representation we also could not find zeros of Eq.~\\ref{determin3.eq}. We conclude that\n the on-site Hubbard interaction cannot lead to the formation of a neutral collective mode. The essential reason is the overlap factors,\n which resolve the sub-lattices: in the first approach the overlap factors reside in the Green's functions, while in the second\n approach they are assigned to the interaction vertices.\n \n \\section{Long range interaction: effective Coulomb interaction}\n \n The Coulomb interaction between electrons in graphene is usually considered to be long-range, so both sub-lattices\n participate equally in it, since the characteristic range of the interaction is much larger than the lattice constant.\n In this section we consider a long range interaction in order to find its effect on the spin-1 collective mode, which was considered\n in~\\cite{JPHCM.Jafari}. If the long range (Coulomb) interaction includes interactions between electrons with equal and\n opposite spins on the two sub-lattices, it is known to result in the plasmon collective mode~\\cite{Wunsch}. The only long range interaction\n which leads to Eq.~\\ref{main.eq} has the form of Eq.~\\ref{long-H-exch.eq}.
Another reason for selecting this form of long range interaction is that the spin-1 collective\n mode is formed from the interaction between electrons with opposite spins.\n\n If we assume that the Coulomb interaction is represented by a constant $V$ between electrons with opposite spins (because the spin-1 mode\n is considered between two electrons with opposite spins), we can introduce the interaction Hamiltonian\n \n \\begin{eqnarray}\n H_{Coul}=V\\sum_{ij}\\left \\{a^{\\uparrow \\dagger}_{i}a^{\\uparrow}_{i}a^{\\downarrow \\dagger}_{j}a^{\\downarrow }_{j}+\n a^{\\uparrow \\dagger}_{i}a^{\\uparrow}_{i}b^{\\downarrow \\dagger}_{j}b^{\\downarrow }_{j}\\right. \\nonumber \\\\\n \\left. +b^{\\uparrow \\dagger}_{i}b^{\\uparrow}_{i}a^{\\downarrow \\dagger}_{j}a^{\\downarrow }_{j} + \n b^{\\uparrow \\dagger}_{i}b^{\\uparrow}_{i}b^{\\downarrow \\dagger}_{j}b^{\\downarrow }_{j} \\right \\}.\n \\label{long-H.eq}\n \\end{eqnarray}\n \n In the exchange channel, we have\n \n \\begin{eqnarray}\n H_{Coul}=-V\\sum_{ij}\\left \\{a^{\\uparrow \\dagger}_{i}a^{\\downarrow}_{j}a^{\\downarrow \\dagger}_{j}a^{\\uparrow }_{i}+\n a^{\\uparrow \\dagger}_{i}b^{\\downarrow}_{j}b^{\\downarrow \\dagger}_{j}a^{\\uparrow }_{i}\\right. \\nonumber \\\\\n \\left.
+b^{\\uparrow \\dagger}_{i}a^{\\downarrow}_{j}a^{\\downarrow \\dagger}_{j}b^{\\uparrow }_{i} + \n b^{\\uparrow \\dagger}_{i}b^{\\downarrow}_{j}b^{\\downarrow \\dagger}_{j}b^{\\uparrow }_{i} \\right \\}.\n \\label{long-H-exch.eq}\n \\end{eqnarray}\n \n If we perform the same calculations starting from Eq.~\\ref{long-H-exch.eq}, as was done starting from Eq.~\\ref{H_int_new.eq}\n to derive Eq.~\\ref{determin3.eq}, we arrive at a different system,\n \n \n\n \\begin{equation}\n \\begin{cases}\n \\chi_{AA}=\\chi^{0}_{AA}+V\\chi^{0}_{AA}\\chi_{AA}+V\\chi^{0}_{AB}\\chi_{BA}+V\\chi^{0}_{AA}\\chi_{BA}+V\\chi^{0}_{AB}\\chi_{AA}\\\\\n \\chi_{AB}=\\chi^{0}_{AB}+V\\chi^{0}_{AA}\\chi_{AB}+V\\chi^{0}_{AB}\\chi_{BB}+V\\chi^{0}_{AA}\\chi_{BB}+V\\chi^{0}_{AB}\\chi_{AB}\\\\\n \\chi_{BA}=\\chi^{0}_{BA}+V\\chi^{0}_{BA}\\chi_{AA}+V\\chi^{0}_{BB}\\chi_{BA}+V\\chi^{0}_{BA}\\chi_{BA}+V\\chi^{0}_{BB}\\chi_{AA}\\\\\n \\chi_{BB}=\\chi^{0}_{BB}+V\\chi^{0}_{BA}\\chi_{AB}+V\\chi^{0}_{BB}\\chi_{BB}+V\\chi^{0}_{BA}\\chi_{BB}+V\\chi^{0}_{BB}\\chi_{AB}\\\\\n \\end{cases}.\n \\label{coupling2.eq}\n \\end{equation}\n\n \n with the solvability condition\n \n \\begin{equation}\n 1-V(\\chi^{0}_{AA}+\\chi^{0}_{AB}+\\chi^{0}_{BA}+\\chi^{0}_{BB})=0,\n \\label{main.eq}\n \\end{equation}\n \n where the susceptibilities are the same as in Eq.~\\ref{chi-sublattices.eq}. This equation coincides with the essential\n equation of~\\cite{Ebrahimkhas-IJMPB}, which predicted a neutral collective mode on the honeycomb lattice.
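The reduction of the coupled long-range system to the single solvability condition $1-V(\chi^{0}_{AA}+\chi^{0}_{AB}+\chi^{0}_{BA}+\chi^{0}_{BB})=0$ can be checked symbolically, assuming the RPA chain couples a vertex on either sub-lattice to both sub-lattices (generic symbols, our own notation):

```python
import sympy as sp

V = sp.symbols('V')
c0 = {(a, b): sp.Symbol(f'chi0_{a}{b}') for a in 'AB' for b in 'AB'}
c = {(a, b): sp.Symbol(f'chi_{a}{b}') for a in 'AB' for b in 'AB'}

# RPA chain in which a vertex on either sub-lattice couples to both
eqs = [sp.Eq(c[a, b],
             c0[a, b] + V*(c0[a, 'A'] + c0[a, 'B'])*(c['A', b] + c['B', b]))
       for a in 'AB' for b in 'AB']
sol = sp.solve(eqs, list(c.values()), dict=True)[0]

S0 = sum(c0.values())
for val in sol.values():
    # every dressed susceptibility has the single pole 1 - V*S0 = 0
    assert sp.denom(sp.cancel(val*(1 - V*S0))) == 1
```

Summing the four equations gives $S = S^{0} + V S^{0} S$ with $S=\sum_{\alpha\beta}\chi_{\alpha\beta}$, so the only pole is at $1-V S^{0}=0$.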
\n If we consider $ \\chi^{0}_{triplet}=\\chi^{0}_{AA}+\\chi^{0}_{AB}+\\chi^{0}_{BA}+\\chi^{0}_{BB}$, the dispersion of\nthe neutral collective mode can be obtained from one of these equations,\n \n \\begin{equation}\n \\Re{\\chi^{0}_{triplet}(\\vec{q},\\omega)}=1\/V~~~~or~~~~~ \\Im{\\chi^{0}_{triplet}(\\vec{q},\\omega)}=0.\n \\label{condition.eq}\n \\end{equation}\n \n \n The zeros of Eqs.~\\ref{condition.eq} are the collections of $(\\vec{q}, \\omega)$ which\n are plotted in Fig.~\\ref{e-h-dis.fig} for $V=3.2t$. This contour plot displays the dispersion of the neutral collective mode.\n The contour plot of $\\Re \\chi^{0}_{triplet}$ for $V=3.2t$ is plotted below the electron-hole $(e-h)$ continuum\n region in the $\\Gamma \\rightarrow K$ direction. However, this range of $V$ is too large for electrons in graphene.\n We checked the solutions and zeros of Eq.~\\ref{condition.eq} for $0.0t<V<3.2t$ and found none; only for $V>3.2t$\n could we find dispersions for the neutral collective mode near the $\\Gamma$ point in the $\\Gamma-K$ direction, which are meaningless.\n \n \n Hence the long range interaction of the type of Eq.~\\ref{long-H.eq} cannot form the neutral collective mode\n in undoped graphene without the one-cone approximation.\n \n \\begin{figure}[h]\n\\begin{center}\n\\vspace{-3mm}\n\\includegraphics[width=6cm,height=5cm,angle=0]{e_h_cont.eps}\n\\vspace{-3mm}\n\\caption{The contour plot of $\\Re{\\chi^{0}_{triplet}(\\vec{q},\\omega)}=1\/V$ is displayed\nfor undoped graphene for $V=3.2t$. The dispersion of the spin collective mode appears below the e-h continuum (cyan region), but not in a\nreasonable range of $V$.\nThe $y$ axis is the energy $\\epsilon$ (eV) and the $x$ axis the momentum $q$. 
The direction in the e-h continuum is $\\Gamma \\rightarrow K$.}\n\\label{e-h-dis.fig}\n\\end{center}\n\\vspace{-3mm}\n \\end{figure}\n \n \n The prediction of a triplet collective mode in Ref.~\\cite{Ebrahimkhas-IJMPB} was due to\n the single-cone approximation for the short range on-site interaction, which led to a scalar form of the polarization operator\n without the $U^2$ terms of Eq.~\\ref{determin2.eq}. The one-cone approximation,\n which eliminates some of the terms of Eq.~\\ref{determin2.eq}, is the only reason for the prediction of the spin-1 collective mode.\n \n \n \\section{Conclusion}\n\n In this research we introduced a new effective long range interaction Hamiltonian and studied the effects of the short range Hubbard\n interaction in two representations. The prediction of a\n neutral collective mode for the on-site and the effective long range interactions was analyzed, and we found that, within the RPA\n for the Hubbard model without the single-cone approximation, the mixing of the overlap factors of the two sub-lattices suppresses\n this collective mode. These two types of interaction have\n different Hamiltonians, Eq.~\\ref{H_int_new.eq} and Eq.~\\ref{long-H-exch.eq}; in both we considered the interaction between\n electrons with opposite spins. We cannot find or predict a neutral collective mode as an effect of the\n effective long range or the short range interaction, because in both cases the overlap factors resolve the sub-lattices and suppress\n the neutral collective mode.\n\n \\section{Acknowledgment}\n \n I wish to thank N. Sharifzadeh for helpful discussions.\n The author is also grateful to the referee of our paper~\\cite{Ebrahimkhas-IJMPB}, whose comments led to the main idea of this\n paper.\n\n \n","meta":{"redpajama_set_name":"RedPajamaArXiv"}}
{"text":"\\section{Introduction}\nLet $p$ be a prime, and let $I_1, I_2\\subseteq (0,p)$ be subintervals.
This paper is motivated by determining conditions on $I_1,I_2$ under which we can ensure the solubility of the congruence\n\\[\nxy\\=1\\bmod p,\\quad (x,y)\\in I_1\\times I_2.\n\\]\nFrom a heuristic point of view we would expect this congruence to have a solution whenever $|I_1|,|I_2|\\gg p^{1\/2}$. However, as highlighted by Heath-Brown \\cite{H-B2000}, the best result to date requires that\n$|I_1|\\cdot|I_2|\\gg p^{3\/2}\\log^2 p$. The proof requires one to estimate incomplete Kloosterman sums\n\\[\nS(n,H)=\\sum_{\\substack{m=n+1\\\\ m\\not\\= 0\\bmod p}}^{n+H}e\\left(\\frac{\\ell\\overline{m}}{p}\\right),\n\\]\nfor $\\ell\\in (\\mathbb{Z}\/p\\mathbb{Z})^*$, for which the Weil bound yields\n\\begin{equation}\\label{eq:weil}\n|S(n,H)|\\le 2(1+\\log p) p^{1\/2}.\n\\end{equation}\nIt has been conjectured by Hooley \\cite{hooley} that\n$S(n,H)\\ll H^{1\/2}p^\\varepsilon,$ for any $\\varepsilon>0$,\nwhich would enable one to handle\nintervals with $|I_1|,|I_2|\\gg p^{2\/3+\\varepsilon}.$\nHowever such a bound appears to remain a distant prospect.\n\nA different approach to this problem involves considering a sequence of pairs of intervals $I_1^{(j)}, I_2^{(j)}$, for $1\\le j\\le J$, and asking whether there is a value of $j$ for which there is a solution to the congruence\n\\begin{equation}\\label{eqn:intro.1}\nxy\\=1\\bmod p,\\quad (x,y)\\in I_1^{(j)}\\times I_2^{(j)}.\n\\end{equation}\nThere are some obvious degenerate cases here. For example, if we suppose that $I_1^{(j)}=I_2^{(j)}$ for all $j$, and that these run over all intervals of a given length $H$, then we are merely asking whether there is a positive integer $h\\le H$ with the property that the congruence $x(x+h)\\=1\\bmod p$ has a solution $x\\in\\mbb{Z}$. This\nis equivalent to deciding whether the set\n$\\{h^2+4:1\\le h\\le H\\}$\ncontains a quadratic residue modulo $p$. When $H=2$, therefore, it is clear that this problem has a solution for all primes $p\\=\\pm 1\\bmod 8$.
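The $H=2$ case is easy to confirm computationally: $h=2$ gives $h^2+4=8$, which is a quadratic residue mod $p$ exactly when $2$ is, i.e. for $p\equiv\pm1\bmod 8$. A brute-force sketch over small primes:

```python
from sympy import primerange

def has_solution(p, h):
    # x(x+h) ≡ 1 (mod p) is soluble iff the discriminant h^2 + 4
    # of x^2 + hx - 1 is a quadratic residue mod p (Euler's criterion)
    return pow(h*h + 4, (p - 1)//2, p) == 1

for p in primerange(3, 500):
    if p % 8 in (1, 7):               # p ≡ ±1 (mod 8)
        assert has_solution(p, 2)     # h = 2 gives h^2 + 4 = 8
        # cross-check by exhaustive search
        assert any((x*(x + 2) - 1) % p == 0 for x in range(p))
```

For $p\equiv\pm3\bmod 8$ the same routine reports no solution with $h=2$, which is why larger $H$ is needed in general.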
We avoid considerations of this sort by assuming that at least one of our sequences of intervals is pairwise disjoint. The following is our main result.\n\n\\begin{theorem}\\label{thm:inverses1}\nLet $H,K>0$ and let $I_1^{(j)}, I_2^{(j)}\\subseteq (0,p)$ be subintervals, for\n$1\\le j\\le J$, such that\n\\[|I_1^{(j)}|=H\\quad\\text{and}\\quad |I_2^{(j)}|=K\\]\nand\n\\[I_1^{(j)}\\cap I_1^{(k)}=\\emptyset\\quad\\text{for all}\\quad j\\not= k.\\]\nThen\nthere exists $j\\in \\{1,\\ldots,J\\}$ for which \\eqref{eqn:intro.1} has a solution\nif\n\\[\nJ\\gg \\frac{p^{3}\\log^4 p}{H^2K^2}.\n\\]\n\\end{theorem}\n\nIf we take $J=1$ in the theorem then we retrieve the above result that \\eqref{eqn:intro.1} is soluble when\n$HK\\gg p^{3\/2}\\log^2 p$. Alternatively, if we allow a larger value of $J$, then we can get closer to what would follow on\nHooley's hypothesised bound for $S(n,H)$.\n\n\n\\begin{corollary}\nWith notation as in Theorem \\ref{thm:inverses1}, suppose that $J\\gg p^{1\/3}$.\nThen\nthere exists $j\\in \\{1,\\ldots,J\\}$ for which \\eqref{eqn:intro.1} has a solution provided that\n$H>p^{2\/3}$ and $K>p^{2\/3}(\\log p)^2$.\n\\end{corollary}\n\n\nOur proof of Theorem \\ref{thm:inverses1} relies upon a mean value estimate for incomplete Kloosterman sums. These types of estimates have been studied extensively for multiplicative characters, especially in connection with variants of Burgess's bounds (see Heath-Brown \\cite{H-B2012} and the discussion therein).\nThe situation for Kloosterman sums is relatively under-developed (see Friedlander and Iwaniec \\cite{FI}, for example). The result we present here appears to be new, although many of our techniques are borrowed directly from the treatment of the analogous multiplicative problem \\cite[Theorem 2]{H-B2012}. The deepest part of our argument is an appeal to Weil's bound for Kloosterman sums. 
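The mean value bound proved below, $\sum_{n=1}^p|S(n,H)|^2\le H^2/p+8pH$, can be confirmed by brute force for small $p$; a sketch (the specific values of $p$, $\ell$, $H$ are arbitrary test choices):

```python
import cmath

def S(n, H, ell, p):
    """Incomplete Kloosterman sum over n < m <= n+H with p not dividing m."""
    total = 0j
    for m in range(n + 1, n + H + 1):
        if m % p:
            mbar = pow(m, -1, p)                     # inverse of m mod p
            total += cmath.exp(2j * cmath.pi * ell * mbar / p)
    return total

p, ell, H = 101, 5, 10
lhs = sum(abs(S(n, H, ell, p))**2 for n in range(1, p + 1))
print(lhs, H*H/p + 8*p*H)   # lhs should not exceed the right-hand side
```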
We will prove the following result in the next section.\n\n\n\\begin{theorem}\\label{thm:mvt1}\nIf $I_1,\\ldots ,I_J\\subseteq (0,p)$ are disjoint subintervals, with $H\/2<|I_j|\\leq H$ for each $j$, then for any $\\ell\\in (\\mathbb{Z}\/p\\mathbb{Z})^*$, we have\n\\[\n\\sum_{j=1}^J\\left|\\sum_{n\\in I_j}e\\left(\\frac{\\ell\\overline{n}}{p}\\right)\\right|^2\\leq 2^{12} p\\log^2H.\n\\]\n\\end{theorem}\n\nTaking $J=1$ shows that, up to a constant factor, this result includes as a special case the bound \\eqref{eq:weil} for incomplete Kloosterman sums.\n\n\\begin{TA}\nWhile working on this paper TDB was supported by\nEPSRC grant \\texttt{EP\/E053262\/1} and\nAH was supported by EPSRC grant \\texttt{EP\/J00149X\/1}.\nWe are grateful to Professor Heath-Brown for several useful discussions.\n\\end{TA}\n\n\\section{Proof of Theorem \\ref{thm:mvt1}}\n\nOur starting point is the following mean value theorem for $S(n,H)$.\n\n\\begin{lemma}\\label{lem:mvt1}\nFor $H\\in\\mbb{N}$ and $\\ell\\in (\\mathbb{Z}\/p\\mathbb{Z})^*$, we have\n$$\n\\sum_{n=1}^p\\left|S(n,H)\\right|^2\\leq \\frac{H^2}{p}+8pH.\n$$\n\\end{lemma}\n\\begin{proof}\nAfter squaring out the inner sum and interchanging the order of summation, the left hand side becomes\n$$\n\\sum_{h_1,h_2=1}^H\\sum_{\\substack{n=1\\\\n\\not= -h_1,-h_2\\bmod p}}^pe\\left(\\frac{\\ell(\\overline{n+h_1}-\\overline{n+h_2})}{p}\\right).\n$$\nUsing orthogonality of characters it is easy to see that the inner sum over $n$ is\n\\begin{align*}\n&=\\sum_{\\substack{x,y\\in (\\mathbb{Z}\/p\\mathbb{Z})^*\\\\ \\overline{x}-\\overline{y}\\=h_1-h_2\\bmod p}}\ne\\left(\\frac{\\ell(x-y)}{p}\\right)\\\\\n&=\n\\frac{1}{p}\n\\sum_{\\substack{x,y\\in (\\mathbb{Z}\/p\\mathbb{Z})^*}}\ne\\left(\\frac{\\ell(x-y)}{p}\\right)\n\\sum_{a=1}^p\ne\\left(\\frac{a( \\overline{x}-\\overline{y} 
)+a(h_2-h_1)}{p}\\right).\n\\end{align*}\nHence\n\\begin{align*}\n\\sum_{n=1}^p\\left|S(n,H)\\right|^2\n&=\n\\frac{1}{p}\n\\sum_{a=1}^p\n\\sum_{h_1,h_2=1}^H\ne\\left(\\frac{a(h_2-h_1)}{p}\\right)\n\\sum_{\\substack{x,y\\in (\\mathbb{Z}\/p\\mathbb{Z})^*}}\ne\\left(\\frac{\\ell(x-y)+a( \\overline{x}-\\overline{y} )}{p}\\right)\\\\\n&=\n\\frac{1}{p}\n\\sum_{a=1}^p\n|K(\\ell,a;p)|^2\n\\sum_{h_1,h_2=1}^H\ne\\left(\\frac{a(h_2-h_1)}{p}\\right),\n\\end{align*}\nwhere $K(\\ell,a;p)$ is the usual complete Kloosterman sum. The contribution from $a=p$ is\n$$\n\\frac{|K(\\ell,0;p)|^2 H^2}{p}=\\frac{H^2}{p},\n$$\nsince $p\\nmid \\ell$. The remaining contribution has modulus\n\\begin{align*}\n\\left|\\frac{1}{p}\n\\sum_{a=1}^{p-1}\n|K(\\ell,a;p)|^2\n\\sum_{h_1,h_2=1}^H\ne\\left(\\frac{a(h_2-h_1)}{p}\\right)\\right|\n&\\leq\n4\n\\sum_{0<|a|\\leq p\/2}\n\\left|\\sum_{h_1,h_2=1}^H\ne\\left(\\frac{a(h_2-h_1)}{p}\\right)\\right|\\\\\n&\\leq\n8\n\\sum_{00$ are (distinct) model parameters denoting the rates of \\emp{annihilation} and \\emp{pair creation}, respectively. With parameter $d\\in(-1,1)$ we can adjust the symmetry of (anti)particles' jumping rate. In plain words, the above dynamics boils down to the following simple rules: two adjacent holes can produce an antiparticle-particle pair (in that order) with rate $c$, antiparticles and particles can only hop to the negative and positive directions with rates $a\\cdot \\frac{1-d}{2}$ and $a\\cdot\\frac{1+d}{2}$, respectively, and when a particle meets an antiparticle they annihilate each other with rate $a$. All other moves are suppressed.\n\nBy rescaling time by $a$, without loss of generality, we can assume that the annihilation rate is set to be $1$. Next, we invoke the definition of \\emp{attractivity} from \\cite[pp.\\ 71--72]{liggettbible}. 
Formally speaking, we require the relation $\\Exp f(\\omb(t))\\leq \\Exp f(\\etab(t))$ to hold for all monotone functions $f:\\Om\\to\\Rbb$ and for all $t\\geq0$ whenever the initial configuration $\\etab(0)$ of $\\etab$ dominates that of $\\omb$ in the coordinate-wise order. Along the lines of \\cite[III., Thm.\\ 2.2]{liggettbible}, attractivity is equivalent to $p$ being monotone non-decreasing (non-increasing) in its first (second) variable, respectively. This means that our dynamics is attractive if and only if $c\\leq \\frac{1-|d|}{2}$, which we assume throughout the paper. (We do not expect different behavior in the non-attractive regime either, but hydrodynamics beyond the shock is not rigorously established there and the interesting phenomena of this paper happen in the attractive parameter domain anyway.)\n\nThe model outlined above is the simplest member, after simple exclusion, of the family of \\emp{misanthrope processes} \\cite{coco}. In the above particular form, it first appeared in \\cite[pp.\\ 179--180]{tothvalkoperturb}.\n\n\\section{Properties of the model}\\label{sec:props}\n\nFirst, we introduce the one-parameter family of product-form extremal stationary distributions of the above model. Then in Subsections \\ref{sec:hydro} and \\ref{sec:flux} we establish some hydrodynamic properties of the model. These are then used in Section \\ref{sec:entropysols} to determine the entropy solution of the Riemann problem.\n\nDue to attractivity, a very convenient feature of our model with rates \\eqref{eq:modelrates} is that it possesses \\emp{product-form stationary distributions}. We define\n\\begin{equation}\\label{eq:paramb}\nb :\\,= \\frac{1}{2 + \\frac{1}{\\sqrt{c}}},\n\\end{equation}\nwhich is a bijection between the parameters $c\\in(0,\\frac{1}{2}]$ and $b\\in(0,\\frac{1}{2+\\sqrt{2}}]$. Now, let $\\theta\\in\\Rbb$ be a generic real that is also referred to as the \\emp{chemical potential}. 
Then define the one-site marginal measure by\n\\begin{align}\\label{eq:statmarginal}\n\\Gamma^{\\theta} :\\,= \\big(\\Gamma^{\\theta}(-1),\\, \\Gamma^{\\theta}(0),\\, \\Gamma^{\\theta}(1)\\big) = \\bigg( \\frac{b\\, \\exp(-\\theta)}{Z(\\theta)},\\,\\frac{1 - 2b}{Z(\\theta)},\\,\\frac{b\\, \\exp(\\theta)}{Z(\\theta)} \\bigg),\n\\end{align}\nwith \\emp{partition sum} $Z(\\theta) :\\,= 1 + 2b\\cdot(\\cosh(\\theta) - 1)$. The product measure $\\Gmb^{\\theta}$ on $\\Om$ is built from these marginals: $\\Gmb^{\\theta} :\\,= \\bigotimes_{j\\in \\Zbb}\\Gamma^{\\theta}$. We denote the expectation with respect to this measure by $\\Exp^{\\theta}$.\nThe proof of stationarity of the above measure was carried out in \\cite{coco}. For the ergodicity part we refer to \\cite[pp. 1350--1352, Sec. 3 and further references therein]{bagurasa}.\n\n\\subsection{Hydrodynamics}\\label{sec:hydro}\n\nEulerian hydrodynamic limits are well established for a large class of \\emp{attractive} asymmetric particle systems \\cite{rezakhanlou,landim,kipnislandim} or more generally \\cite{bagurasa, bagurasastrong} that also includes our model. Below we recall some elements of this theory following the latter articles. Let $N$ be the rescaling factor and define\n\\[\n\\alpha^N(\\mathrm{d}x, t) :\\,= \\frac{1}{N}\\sum_{j\\in\\Zbb}\\om_{j}(t\\cdot N)\\cdot\\delta_{j\/N}(\\mathrm{d}x)\n\\]\nas the \\emp{rescaled empirical measure} of $\\omb(t)$ at macroscopic space and time $x\\in\\Rbb$ and $t\\geq0$, respectively.\n\nNow, the sequence of initial probability distributions $(\\pib_N)_{N\\in\\Zbb^+}$ can be chosen arbitrarily such that the empirical measure $\\alpha^N$ of $\\omb^N(0)$, distributed as $\\pib_N$, converges in probability to $u_0(\\,\\cdot\\,)\\,\\mathrm{d}x$, where $u_0$ is some deterministic bounded measurable profile on $\\Rbb$ (see \\cite[pp. 1346--1348, Sec. 2.3]{bagurasa}). 
Then for each $t>0$ the empirical measure $\\alpha^N$ of $\\omb^N(t)$ converges to $u(\\,\\cdot\\,,t)\\,\\mathrm{d}x$ in probability, where the density profile $u(\\,\\cdot\\,,t):\\Rbb\\to[-1,1]$ is one of the weak solutions of the \\emp{conservation law}\n\\begin{equation}\\label{eq:hydrogeneral}\n\\begin{aligned}\n\\partial_t u + \\partial_x G(u) &= 0;\\\\\nu(\\,\\cdot\\,,0) &= u_0(\\cdot).\n\\end{aligned}\n\\end{equation}\nIn this PDE, the \\emp{hydrodynamic flux} $G:[-1,1]\\to\\Rbb_0^+$ is\n\\[\nG(\\varrho) = \\Exp_{\\nu_{\\varrho}} p(\\om_0,\\om_1)\\qquad (\\varrho\\in[-1,1]),\n\\]\nwhere the measure $\\nu_{\\varrho}$ is a \\emp{translation-invariant extremal stationary distribution} of the process $(\\omb(t))_{t\\geq0}$ corresponding to density $\\varrho$. In our case, as we discussed in the beginning of Section \\ref{sec:props}, these measures can be expressed in product form. In plain words, $G$ describes the net flux across a bond in equilibrium (recall the dynamics of $\\omb$ being totally asymmetric) and it will be determined in Section \\ref{sec:flux}.\n\nSubsequently, we will only consider the \\emp{Riemann problem} (step initial datum) for the initial value problem \\eqref{eq:hydrogeneral}, that is\n\\begin{equation}\\label{eq:stepinitcond}\nu_0(x) = \\left\\{\n\\begin{array}{ll}\n\\ul \\quad &\\mbox{if} \\quad x \\leq 0; \\\\\n\\ur \\quad &\\mbox{if} \\quad x > 0, \\\\\n\\end{array}\n\\right.\n\\end{equation}\nwhere $\\ul\\neq\\ur\\in[-1,1]$ are the (initial) densities on the left and the right-hand side of the origin. In the case of the microscopic process, for the sake of simplicity, we set the density values ($\\ul,\\ur$) by picking the appropriate site-dependent parameters $\\theta_j$ of the product measure $\\Gmb^{\\theta}$:\n\\begin{equation*}\n\\Exp^{\\theta_j}(\\om_j) =\n\\left\\{\n\\begin{array}{ll}\n\\ul \\quad &\\mbox{if} \\quad j \\leq 0; \\\\\n\\ur \\quad &\\mbox{if} \\quad j > 0. 
 \\\\\n\\end{array}\n\\right.\n\\end{equation*}\nAs we discussed above, the initial measure can be chosen from a more general set of measures. Indeed, this choice of initial measures will not play any role in the present article.\n\nIt is more convenient to reparametrize the marginal $\\Gamma^{\\theta}$ by the (\\emp{signed}) \\emp{density} of particles instead of the chemical potential $\\theta$. Let $v(\\theta)$ be the expected (signed) number of particles occupying an arbitrary lattice point under the measure $\\Gmb^{\\theta}$:\n\\[\nv(\\theta) :\\,= \\Exp^{\\theta}\\om_0 = \\frac{2b\\,\\sinh(\\theta)}{1 + 2b\\cdot(\\cosh(\\theta) - 1)} \\qquad (\\theta\\in\\Rbb).\n\\]\nIt is not hard to see that the assignment $\\theta\\mapsto v(\\theta)\\in[-1,1]$ is strictly monotone, hence bijective. Its inverse $v\\mapsto\\theta(v)$ can explicitly be calculated:\n\\begin{equation}\\label{eq:densityparam}\n\\theta(v)=\n\\log\\bigg(\\frac{(1 - 2b)\\cdot v + \\sqrt{4b^2 + (1-4b)\\cdot v^2}}{2b\\cdot(1-v)}\\bigg) \\qquad (v\\in[-1,1]).\n\\end{equation}\nWith a slight abuse of notation we will freely switch between these two parametrizations of the stationary measure.\n\nTo close this section, we mention a well-known strategy for solving \\eqref{eq:hydrogeneral}. When $G$ has a continuous derivative and $u$ is smooth, one can rewrite \\eqref{eq:hydrogeneral} as $\\partial_t u + G'(u)\\cdot \\partial_x u = 0$. The \\emp{characteristic curves} are then defined as lines of the $x$-$t$ space along which the solution is constant. To find these curves one solves $\\frac{\\mathrm{d}}{\\mathrm{d}t}u(x(t),t)=0$, which translates to $\\frac{\\mathrm{d}x(t)}{\\mathrm{d}t}=G'(u)$ after comparison to the original equation. The curve can then be traced back to the initial condition and takes the form $x = x_0 + t\\cdot G'(u_0(x_0))$. The quantity $G'(u_0(x_0))$ is known as the \\emp{characteristic velocity}. 
Finally, to look up the value of the solution $u$ at an arbitrary point $(x,t)\\in\\Rbb\\times\\Rbb^+_0$ one just follows the characteristic curve back to the initial time and reads the value of $u_0$ there.\n\nThe previous program relies on $u$ being smooth, which sometimes just fails to happen. That is, there can exist points of $\\Rbb\\times\\Rbb^+$ that are hit by more than one characteristic line, each carrying a different initial value. In these cases, classical (differentiable) solutions of \\eqref{eq:hydrogeneral} cease to exist and more general, so-called \\emp{weak solutions}, need to be defined. By a careful selection of the weak solutions one can uniquely identify the physically relevant one, which is also referred to as the \\emp{entropy solution}. For more details and properties of the entropy solutions we refer to the monographs \\cite{smoller,holdenrisebro}.\n\nIn our case the initial condition $u_0$ is the step function \\eqref{eq:stepinitcond}, hence the characteristic velocities can only attain two values: $G'(\\ul)$ and $G'(\\ur)$. Depending on the relation between the densities $\\ul,\\ur$ and on the monotonicity of $G'$ in $[\\min(\\ul,\\ur),\\max(\\ul,\\ur)]$ several possibilities can emerge. In the simplest case when $G$ is strictly concave one can essentially distinguish two cases:\n(1) if $G'(\\ul) > G'(\\ur)$ then the characteristic lines meet in some finite time, hence a \\emp{shock wave}, that is a moving discontinuity, appears in the entropy solution;\n(2) if $G'(\\ul) < G'(\\ur)$ then the characteristic lines move away from each other, giving rise to a \\emp{rarefaction fan} in which the initial sharp discontinuity smears out in time.\n\nIn the general case, based on \\cite[pp. 30--34 of Sec. 
2.2]{holdenrisebro}, the unique entropy solution of \\eqref{eq:hydrogeneral} with initial condition \\eqref{eq:stepinitcond} can explicitly be given as\n\\begin{equation}\\label{eq:generalentropysol}\nu(x,t) =\n\\left\\{\n\\begin{aligned}\n&\\ul &&\n\\text{if $x\\leq (G_{\\frown})'(\\ul)\\cdot t$};\\\\\n&\\big[(G_{\\frown})'\\big]^{-1}(x\/t) &&\n\\text{if $(G_{\\frown})'(\\ul)\\cdot t < x\\leq (G_{\\frown})'(\\ur)\\cdot t$};\\\\[0.3em]\n&\\ur &&\n\\text{if $x > (G_{\\frown})'(\\ur)\\cdot t$},\n\\end{aligned}\n\\right.\n\\end{equation}\nwhere $\\ul>\\ur$ and $G_{\\frown}$ is the \\emp{upper concave envelope} of $G$ defined to be the smallest concave function which is greater than or equal to $G$ above the interval $[\\ur,\\ul]$. The derivative of $G_{\\frown}$ is denoted by $(G_{\\frown})'$, while its inverse by $[(G_{\\frown})']^{-1}$. Analogously, in the opposite case when $\\ur>\\ul$, the solution can be obtained with the help of the \\emp{lower convex envelope} of $G$.\n\n\\subsection{Flux}\\label{sec:flux}\n\nWe explicitly calculate the \\emp{hydrodynamic flux} $G$ of \\eqref{eq:hydrogeneral} corresponding to our microscopic model. As we noted in Subsection \\ref{sec:hydro}, the hydrodynamic flux is the expected current rate across a bond under the translation-invariant extremal stationary distribution $\\Gmb^{\\theta}$ (see the beginning of Section \\ref{sec:props}). That is\n\\begin{align*}\nG(v) = \\Exp^{\\theta(v)} p(\\om_0,\\om_1)\n& = \\frac{1 - d}{2}\\cdot \\Gamma^{\\theta(v)}(0)\\cdot\\Gamma^{\\theta(v)}(-1)\n+ \\frac{1 + d}{2}\\cdot \\Gamma^{\\theta(v)}(1)\\cdot\\Gamma^{\\theta(v)}(0)\\\\\n& + c \\cdot \\Gamma^{\\theta(v)}(0)\\cdot\\Gamma^{\\theta(v)}(0)\n+ 1 \\cdot \\Gamma^{\\theta(v)}(1)\\cdot\\Gamma^{\\theta(v)}(-1),\n\\end{align*}\nwhere $v\\in[-1,1]$. 
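As an illustrative numerical cross-check (not part of the paper's argument), one can evaluate this expectation directly from the marginals \\eqref{eq:statmarginal} via the inverse parametrization \\eqref{eq:densityparam} and compare it with the closed density-parametrized form stated further below; the sketch assumes $d=0$:

```python
import math

def flux_check(c, v):
    """Evaluate G(v) (case d = 0) both from the one-site marginals and in closed form."""
    b = 1.0 / (2.0 + 1.0 / math.sqrt(c))                    # parameter b from c
    theta = math.log(((1 - 2*b) * v + math.sqrt(4*b*b + (1 - 4*b) * v*v))
                     / (2*b * (1 - v)))                     # inverse parametrization theta(v)
    Z = 1 + 2*b * (math.cosh(theta) - 1)                    # partition sum
    g_minus = b * math.exp(-theta) / Z                      # marginal of -1
    g_zero = (1 - 2*b) / Z                                  # marginal of 0
    g_plus = b * math.exp(theta) / Z                        # marginal of +1
    # expected jump rate across a bond: two hop terms (d = 0), pair creation, annihilation
    g_marg = 0.5*g_zero*g_minus + 0.5*g_plus*g_zero + c*g_zero*g_zero + g_plus*g_minus
    # closed form of the symmetric flux
    g_closed = (-v*v/2
                + (4*b*b + (1 - 2*b)**2 * v*v)
                / (2 * (4*b*b + (1 - 2*b) * math.sqrt(4*b*b + (1 - 4*b) * v*v))))
    return g_marg, g_closed

for c in (0.04, 1/16, 0.3):
    for v in (-0.9, -0.5, 0.0, 0.5, 0.9):
        m, cl = flux_check(c, v)
        assert abs(m - cl) < 1e-12
```

The same routine also confirms the density parametrization itself: the computed marginals satisfy $\\Gamma^{\\theta(v)}(1)-\\Gamma^{\\theta(v)}(-1)=v$ up to rounding.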
We remark that the annihilation step in the last term also counts as one towards the flux, since only a unit of signed charge is transferred over the bond in question.\n\nNow, substituting the measure $\\Gamma^{\\theta(v)}$ of \\eqref{eq:statmarginal} into the previous display we get\n\\begin{equation*}\nG(v) = \\frac{b\\cdot(1 - 2b)\\cdot(\\cosh(\\theta(v)) + d \\cdot \\sinh(\\theta(v))) + 2 b^2}{(1 + 2b\\cdot(\\cosh(\\theta(v)) - 1))^2}.\n\\end{equation*}\nNote that $G$ is \\emp{even} (i.e.\\ symmetric about the vertical axis) if and only if $d=0$. This case is shown in Figure \\ref{fig:flux}. In the upcoming sections we will only investigate the case when $d=0$.\n\nNow, taking into consideration \\eqref{eq:densityparam} we can obtain the density parametrization of the flux. Elementary calculus in combination with hydrodynamic results as described in Section \\ref{sec:hydro} then yields\n\\begin{figure}[!htb]\n\\centering\n\\includegraphics[width=0.75\\textwidth]{flux}\n\\caption[Hydrodynamic flux]{The hydrodynamic flux $G$ in the case $d=0$. A vertical slice of the surface gives a particular density-flux assignment for a $b$ value.}\\label{fig:flux}\n\\end{figure}\n\\begin{theorem}\\label{thm:main}\n The hydrodynamic limit as described in Section \\ref{sec:hydro} applies to our model \\eqref{eq:infgen} with rates \\eqref{eq:modelrates} in the attractive range $c\\leq\\frac{1-|d|}2$ with hydrodynamic flux $H:[-1, 1]\\to[0,+\\infty)$ given by\n\\begin{equation*}\nH(v) = \\frac{v\\cdot(d - v)}{2} +\n\\frac{(1 - d\\cdot v)\\cdot(4 b^2 + (1-2b)^2\\cdot v^2)}{2 \\,\\big(4 b^2 + (1 - 2 b) \\cdot\\sqrt{4 b^2 + (1 - 4 b)\\cdot v^2}\\big)}\n\\qquad (v\\in[-1,1]),\n\\end{equation*}\nwhere $b = \\frac{1}{2 + c^{-1\/2}}$, $c > 0$ and $d\\in(-1,1)$ (see \\eqref{eq:paramb}). 
In the case $d=0$, we simply denote the flux by $G$:\n\\begin{equation}\\label{eq:fluxsym}\nG(v) = -\\frac{v^2}{2} +\n\\frac{4 b^2 + (1-2b)^2\\cdot v^2}{2\\,\\big(4 b^2 + (1 - 2 b)\\cdot\\sqrt{4 b^2 + (1 - 4 b)\\cdot v^2}\\big)} \\qquad (v\\in[-1,1]).\n\\end{equation}\nFor $G$ the following hold: if\n$b \\in \\big(\\frac{1}{6},\\,\\frac{1}{2+\\sqrt{2}}\\big]\\Longleftrightarrow c \\in \\big(\\frac{1}{16},\\,\\frac{1}{2}\\big]$\n(respectively $b = \\frac{1}{6}\\;\\Longleftrightarrow\\; c=\\frac{1}{16}$), then it is strictly (respectively non-strictly) concave in $[-1,1]$. Otherwise, when\n$b \\in \\big(0,\\, \\frac{1}{6}\\big)\\Longleftrightarrow c \\in \\big(0,\\, \\frac{1}{16}\\big)$,\nit is neither concave nor convex, having two inflection points\n\\begin{equation}\\label{eq:inflpoints}\n\\pm v_{\\infl} = \\pm\\sqrt{\\frac{(2 b^2 (1-2b))^{2\/3} - 4 b^2}{1-4b}},\n\\end{equation}\nwhich separate a concave-convex ($-v_{\\infl}$) and a convex-concave ($v_{\\infl}$) region, respectively.\n\\end{theorem}\n\n\\section{Entropy solutions}\\label{sec:entropysols}\n\nWe find the entropy solutions of the Riemann problem \\eqref{eq:hydrogeneral} with step initial datum \\eqref{eq:stepinitcond} for all possible pairs of initial densities $\\ul,\\ur\\in[-1,1]$. Recalling Section \\ref{sec:hydro}, for a purely concave flux only two scenarios can occur: the discontinuity at zero is preserved over time (\\emp{shock wave}) or it immediately smears out (\\emp{rarefaction fan}). As we have seen, the flux $G$ can be non-concave in some parameter region, and this results in plenty of qualitatively different solutions for \\eqref{eq:hydrogeneral}, to be discussed in more detail below.\n\n\\paragraph{Auxiliary calculations.} Based on Theorem \\ref{thm:main}, we introduce some further notation, valid \\emp{only} in the range $b\\in(0,\\frac{1}{6})$. 
The positive global maximum point of \\eqref{eq:fluxsym} is given by\n\\[\nv_{\\max} = \\frac{1}{2}\\sqrt{\\frac{(1 + 2b)(1 - 6b)}{1 - 4b}},\n\\]\nwhile the negative one is at $-v_{\\max}$. The maximum value is then $G(v_{\\max}) = G(-v_{\\max}) = (1 - 2b)^2 \/ (8 - 32b)$.\nAt $0$ the tangent line is horizontal and $G$ has a local minimum with $G(0) = b$. This tangent intersects the curve in two points placed symmetrically about the vertical axis. The positive one is at\n\\begin{equation*}\nv_0 = \\sqrt{\\frac{(1 - 2b)(1 - 6b)}{1 - 4b}},\n\\end{equation*}\nwhile the negative one is at $-v_0$. See Figure \\ref{fig:tangent}.\n\\begin{figure}[!htb]\n\\centering\n\\includegraphics[width=0.65\\paperwidth]{tangent}\n\\caption[Flux (non-concave case)]{Hydrodynamic flux $G$ in the non-concave case and its tangents, where $b=0.08$ and $d=0$. From a general point $v\\in[-1,1]$ one can possibly draw at most two lines that are tangential to $G$ (see $v_e^{-}$ and $v_e^{+}$). On the other hand, a tangent line at $v_e$ can have two intersections with the graph of $G$ (see $v_m^{-}$ and $v_m^{+}$). $\\pm v_{\\max}$ are the maximum points while $\\pm v_{0}$ are the intersections of $G$ with the constant (dashed) line at $b$.}\\label{fig:tangent}\n\\end{figure}\n\nTo determine the lower convex and upper concave envelopes of $G$ for different densities (recalling \\eqref{eq:generalentropysol}) and to finally obtain the phase diagram we need to address some further special points of the flux $G$. Below we essentially define the tangential points, which will be denoted by $v_e$'s, whereas $v_m$'s will indicate the intersection points of $G$ with a tangent line. 
In more detail, the tangent line drawn at the point $(v_e, G(v_e))$ intersects the graph of $G$ at (another) point $(v,G(v))$ if and only if it satisfies\n\\begin{equation}\\label{eq:tangentpoint}\nG(v) = G(v_{e}) + G'(v_{e})(v - v_{e}).\n\\end{equation}\nWe can look at \\eqref{eq:tangentpoint} in two different ways.\n\\begin{itemize}[leftmargin=*]\n\\item One can look for an intersection $(v_m,G(v_m))$ of the graph and the tangent line touching at a \\emp{given} $(v_{e}, G(v_e))$. The intersection that lies on the left (right) of $v_{e}$ is denoted by $v_m^{-}(v_{e})$ [$v_m^{+}(v_{e})$]. If $v_m^{-}(v_{e})$ [$v_m^{+}(v_{e})$] does not exist, then $v_m^{-}(v_{e}):\\,=-\\infty$ [$v_m^{+}(v_{e}):\\,=+\\infty$]. See Figure \\ref{fig:tangent}.\n\\item From a \\emp{fixed} $(v,G(v))$ one can solve \\eqref{eq:tangentpoint} for a tangent point $(v_{e},G(v_e))$ of $G$. If more than one solution exists, then let $v_{e}^{-}(v)$ be the closer and $v_{e}^{+}(v)$ the farther one from $v$. By default $v_{e}(v):\\,=v_{e}^{-}(v)$. If only one solution exists, then $v_{e}(v)=v_{e}^{-}(v)=v_{e}^{+}(v)$. 
See Figure \\ref{fig:tangent}.\n\\end{itemize}\nIn the following, the previously defined points, illustrated by Figure \\ref{fig:tangent}, are heavily used to determine those regions where the dynamics follows a shock wave or a rarefaction fan.\n\\paragraph{Phase diagram.} Notice that if $G''$ does not change sign in $[\\min(\\ul,\\ur),\\max(\\ul,\\ur)]$, and furthermore the secant of $G$ connecting $(\\ul,G(\\ul))$ with $(\\ur,G(\\ur))$ is located \\emp{below} (\\emp{above}) its graph and\n\\begin{itemize}[leftmargin=*]\n\\item $\\ul < \\ur$ ($\\ul > \\ur$), then the characteristic lines `converge' and collide, resulting in the formation of a \\emp{shock wave}.\n\\item $\\ul > \\ur$ ($\\ul < \\ur$), then the characteristic lines `diverge' and the initial discontinuity smooths out according to the physically relevant entropy solution, i.e.\\ a \\emp{rarefaction fan} forms.\n\\end{itemize}\nOur first task is to explore these simple regions. Then in the general case we find the lower convex as well as upper concave envelopes of $G$, which will give shock and rarefaction parts of the solution via formula \\eqref{eq:generalentropysol}. These functions can be expressed in a rather simple way by using the previously defined $v_e$'s and $v_m$'s.\n\\begin{figure}[!htb]\n\\centering\n\\includegraphics[width=0.7\\paperwidth]{rsr-srs}\n\\caption[RSR and SRS]{Rarefaction fan -- Shock wave -- Rarefaction fan (left) and Shock wave -- Rarefaction fan -- Shock wave (right) profiles evolving in space-time when $\\ul=1=-\\ur$ and $\\ul=-v_{\\max}=-\\ur$, respectively, where $b=0.08$ and $d=0$.}\n\\label{fig:rsr-srs}\n\\end{figure}\n\nThe general case is as follows. Observe that if $b\\geq\\frac{1}{6}$ then a shock wave (rarefaction fan) forms when $\\ul < \\ur$ ($\\ul > \\ur$). Henceforth we can assume that $b$ belongs to $(0,\\frac{1}{6})$. For the simpler parts of the phase diagram we can apply the above rule. 
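The special points $v_{\\max}$, $v_0$ and $v_{\\infl}$ introduced above can be verified numerically from \\eqref{eq:fluxsym}; the following sketch (illustrative only, using the Figure \\ref{fig:tangent} value $b=0.08$) checks that $G(v_{\\max})=(1-2b)^2\/(8-32b)$, that $G(v_0)=G(0)=b$, and that $G''$ changes sign at $v_{\\infl}$:

```python
import math

b = 0.08  # a value in the non-concave regime b in (0, 1/6)

def G(v):
    """Symmetric hydrodynamic flux (case d = 0)."""
    return (-v*v/2
            + (4*b*b + (1 - 2*b)**2 * v*v)
            / (2 * (4*b*b + (1 - 2*b) * math.sqrt(4*b*b + (1 - 4*b) * v*v))))

v_max = 0.5 * math.sqrt((1 + 2*b) * (1 - 6*b) / (1 - 4*b))          # global maximum point
v_0 = math.sqrt((1 - 2*b) * (1 - 6*b) / (1 - 4*b))                  # G(v_0) = G(0) = b
v_infl = math.sqrt(((2*b*b*(1 - 2*b))**(2/3) - 4*b*b) / (1 - 4*b))  # inflection point

assert abs(G(v_max) - (1 - 2*b)**2 / (8 - 32*b)) < 1e-12
assert G(v_max) > G(v_max - 1e-3) and G(v_max) > G(v_max + 1e-3)    # local maximum
assert abs(G(0.0) - b) < 1e-12 and abs(G(v_0) - b) < 1e-12

def G2(v, h=1e-4):
    """Central second difference approximating G''(v)."""
    return (G(v + h) - 2*G(v) + G(v - h)) / (h*h)

assert G2(v_infl - 0.05) > 0 > G2(v_infl + 0.05)  # convex below, concave above v_infl
```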
Recall the inflection points of \\eqref{eq:inflpoints}.\n\\begin{itemize}[leftmargin=*]\n\\item[] \\textbf{Shock wave (\\textrm{S}) regions}. If $\\ul < \\ur$, then $-1\\leq \\ul \\leq v_{m}^{-}(v_{e}(1))$ and $\\ur \\geq v_{m}^{+}(v_{e}(\\ul))$; $v_{m}^{+}(v_{e}(-1)) \\leq \\ur \\leq 1$ and $\\ul \\leq v_{m}^{-}(v_{e}(\\ur))$; $-1\\leq \\ul\\leq -v_{\\infl}$ and $\\ur\\leq v_e(\\ul)$; $v_{\\infl}\\leq \\ur \\leq 1$ and $\\ul \\geq v_e(\\ur)$. If $\\ul > \\ur$, then $\\ul,\\ur\\in[-v_{\\infl},v_{\\infl}]$; furthermore $v_{\\infl} \\leq \\ul \\leq v_{\\max}$ ($-v_{\\max}\\leq \\ur\\leq -v_{\\infl}$) and $v^{+}_e (\\ul) \\leq \\ur \\leq v^{-}_m(\\ul)$ ($v^{+}_m(\\ur)\\leq \\ul \\leq v^{+}_e(\\ur)$).\n\\item[] \\textbf{Rarefaction fan (\\textrm{R}) regions}. If $\\ul < \\ur$, then $\\ul,\\ur\\in[-v_{\\infl},v_{\\infl}]$. On the other hand, if $\\ul > \\ur$, then $\\ul,\\ur\\in[-1,-v_{\\infl}]$ or $\\ul,\\ur\\in[v_{\\infl},1]$.\n\\end{itemize}\n\nThe situation becomes much more complex if the secant crosses the graph of $G$ above the interval $(\\min(\\ul,\\ur),\\max(\\ul,\\ur))$. We discuss these cases in the following.\n\\begin{itemize}[leftmargin=*]\n\\item[] \\textbf{Rarefaction fan -- Shock wave (\\textrm{RS}) regions}. If $\\ul < \\ur$, then $v_{\\infl} < \\ur \\leq 1$ and $-v_{\\infl} \\leq \\ul < v_{e}(\\ur)$. On the other hand, if $\\ul > \\ur$, then $-v_{\\max} \\leq \\ur < v_{\\infl}$ and $v_{e}^{+}(\\ur) < \\ul \\leq 1$.\n\\item[] \\textbf{Shock wave -- Rarefaction fan (\\textrm{SR}) regions}. If $\\ul < \\ur$, then $-1 \\leq \\ul < -v_{\\infl}$ and $v_{e}(\\ul) < \\ur \\leq v_{\\infl}$. On the other hand, if $\\ul > \\ur$, then $-v_{\\infl} < \\ul \\leq v_{\\max}$ and $-1 \\leq \\ur < v_{e}^{+}(\\ul)$.\n\\item[] \\textbf{Rarefaction fan -- Shock wave -- Rarefaction fan (\\textrm{RSR}) region}: $\\ul > v_{\\max}$ and $\\ur < -v_{\\max}$. 
The shock wave in the middle has zero speed and $\\lim_{x\\to0^+}u(x,t)-\\lim_{x\\to0^-}u(x,t) = -2v_{\\max}$ for all $t>0$. See Figure \\ref{fig:rsr-srs}.\n\\item[] \\textbf{Shock wave -- Rarefaction fan -- Shock wave (\\textrm{SRS}) region}: $-1 \\leq \\ul < -v_{\\infl}$ and $v_{\\infl} < \\ur < v_{m}^{+}(v_{e}(\\ul))$.\n The shocks are connected by a rarefaction fan and they are moving away from each other. See Figure \\ref{fig:rsr-srs}.\n\\end{itemize}\n\\begin{figure}[!htb]\n\\centering\n\\includegraphics[width=0.6\\paperwidth]{phasediagram_4}\n\\caption[Phase diagram]{Phase diagram of the model with varying values of $b$, where $d=0$. The $x$ and $y$ axes represent the initial density on the left ($\\ul$) and the right ($\\ur$), respectively. $\\mathbf{S}$ ($\\mathbf{R}$) denotes a shock wave (rarefaction fan).}\\label{fig:phasediagram}\n\\end{figure}\n\nThe whole phase diagram can be seen in Figure \\ref{fig:phasediagram} for four different $b$ values. The horizontal and vertical axes correspond to the initial density on the left ($\\ul$) and the right ($\\ur$), respectively. $\\mathbf{S}$ denotes the formation of a shock wave, while $\\mathbf{R}$ corresponds to a rarefaction fan. In the assumed $d=0$ case, the model has a symmetry: swapping particles and antiparticles while flipping the direction of the jumps at the same time. This results in the transformation $(\\ul,\\ur)\\to (-\\ur, -\\ul)$, which shows up as a symmetry on the diagrams (including reversing the order of $\\mathbf{R}$'s and $\\mathbf{S}$'s due to reflection of space). 
The other diagonal reflection, $(\\ul,\\ur)\\to (\\ur,\\ul)$, is \\emp{not} a symmetry of the particle system, since positive particles jump to the right and negative ones to the left, which fundamentally influences the hydrodynamic behavior.\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\nImage-to-image translation attempts to convert the image appearance from one domain to another while preserving the intrinsic image content.\nMany computer vision tasks can be formalized as a certain image-to-image translation problem, such as super-resolution~\\cite{ledig2016photo,shi2016real}, image colorization~\\cite{zhang2016colorful,zhang2017real,iizuka2016let}, image segmentation \\cite{long2015fully,eigen2015predicting}, and image synthesis \n\\cite{chen2017photographic,simo2016learning,xie2015holistically,laffont2014transient,bozhao2017arxiv}. \nHowever, conventional image-to-image translation methods are all task specific.\nA common framework for universal image-to-image translation remains an emerging research subject in the literature, which has gained considerable attention in recent studies\n\\cite{isola2016image,zhu2017unpaired,kim2017learning,liu2017unsupervised,yi2017dualgan}.\n\n\\begin{figure}[t]\n\\begin{center}\n  \\centering\n  \\resizebox{1\\linewidth}{!}{\n  \\begin{minipage}[t]{1.0\\linewidth}\n  \\centering\n  \\includegraphics[width=0.85\\linewidth]{fig1.pdf}\n  \\end{minipage}\n  }\n  \\caption{\n  Given unpaired images from two domains, our proposed \\proposedd~ learns the image-to-image translation by a stacked structure in a coarse-to-fine manner.\n  For the Cityscapes \\textit{Labels $\\rightarrow$ Photo} task in $512 \\times 512$ resolution, the result of \\proposedd~ (left) appears more realistic and includes finer details compared with the result of CycleGAN~\\cite{zhu2017unpaired} (right).}\n  \\label{fig:1}\n\\end{center}\n\\end{figure}\n\nIsola \\textit{et al}.~\\cite{isola2016image} leveraged the power of \ngenerative 
adversarial networks (GANs) \\cite{goodfellow2014generative,mirza2014conditional,bozhao2018arxiv}, which encourage the translation results to be indistinguishable from real images in the target domain, \nto learn image-to-image translation from image pairs in a supervised fashion.\nHowever, obtaining pairwise training data is time-consuming and heavily relies on human labor.\nRecent works \\cite{zhu2017unpaired,kim2017learning,liu2017unsupervised,yi2017dualgan} explore tackling the image-to-image translation problem without using pairwise data.\nUnder the unsupervised setting, besides the traditional adversarial loss used in supervised image-to-image translation, a cycle-consistent loss is introduced to constrain the two cross-domain transformations $G$ and $F$ to be the inverses of each other (\\textit{i}.\\textit{e}., $F(G(x))\\approx x$ and $G(F(y))\\approx y$).\nBy jointly optimizing the adversarial and cycle-consistent losses, the networks learn how to accomplish cross-domain transformations without using pairwise training data.\n\nDespite the progress mentioned above, existing unsupervised image-to-image translation methods may generate inferior results when the two image domains exhibit significant appearance differences or the image resolution is high.\nAs shown in Figure \\ref{fig:1}, the result of CycleGAN~\\cite{zhu2017unpaired} in translating a Cityscapes semantic layout to a realistic picture lacks details and remains visually unsatisfactory.\nThe reason for this phenomenon lies in the significant visual gap between the two distinct image domains, which makes the cross-domain transformation too complicated to be learned by running a single-stage unsupervised approach.\n\nJumping out of the scope of unsupervised image-to-image translation, many methods have leveraged the power of multi-stage refinements to tackle image generation from latent vectors~\\cite{denton2015deep,karras2017pregressive}, caption-to-image~\\cite{zhang2016stackgan}, and supervised 
image-to-image translation~\\cite{chen2017photographic,eigen2015predicting,wang2017high}.\nBy generating an image in a coarse-to-fine manner, a complicated transformation is broken down into easy-to-solve pieces.\nWang \\textit{et al}.~\\cite{wang2017high} successfully tackled the high-resolution image-to-image translation problem in such a coarse-to-fine manner with multi-scale discriminators.\nHowever, their method relies on pairwise training images, so cannot be directly applied to our studied unsupervised image-to-image translation task.\nTo the best of our knowledge, there exists no attempt to exploit stacked networks to overcome the difficulties encountered in learning unsupervised image-to-image translation.\n\nIn this paper, we propose the stacked cycle-consistent adversarial networks (SCANs) aiming for unsupervised learning of image-to-image translation. \nWe decompose a complex image translation into multi-stage transformations, including a coarse translation followed by multiple refinement processes.\nThe coarse translation learns to sketch a primary result in low-resolution.\nThe refinement processes improve the translation by adding details into the previous results to produce higher resolution outputs.\nWe adopt a conjunction of an adversarial loss and a cycle-consistent loss in all stages to learn translations from unpaired image data.\nTo benefit more from multi-stage learning, we also introduce an adaptive fusion block in the refinement processes to learn the dynamic integration of the current stage's output and the previous stage's output. \nExtensive experiments demonstrate that our proposed model can not only generate results with realistic details, but also enable us to learn unsupervised image-to-image translation in higher resolution.\n\nIn summary, our contributions are mainly two-fold. 
Firstly, we propose SCANs to model the unsupervised image-to-image translation problem in a coarse-to-fine manner for generating results with finer details in higher resolution.\nSecondly, we introduce a novel adaptive fusion block to dynamically integrate the current stage's output and the previous stage's output, which outperforms directly stacking multiple stages.\n\n\n\\section{Related Work}\n\n\\smallskip\n\\noindent\\textbf{Image-to-image translation.} GANs~\\cite{goodfellow2014generative} have shown impressive results in a wide range of tasks including super-resolution \\cite{ledig2016photo,shi2016real}, video generation \\cite{xiong2018gan}, image colorization \\cite{isola2016image}, image style transfer \\cite{zhu2017unpaired} etc. \nThe essential part of GANs is the idea of using an adversarial loss that encourages the translated results to be indistinguishable from real target images.\nAmong the existing image-to-image translation works using GANs, perhaps the most well-known one would be Pix2Pix \\cite{isola2016image}, in which Isola \\textit{et al}.~applied GANs with a regression loss to learn pairwise image-to-image translation.\nDue to the fact that pairwise image data is difficult to obtain, image-to-image translation using unpaired data has drawn rising attention in recent studies.\nRecent works by Zhu \\textit{et al}.~\\cite{zhu2017unpaired}, Yi \\textit{et al}.~\\cite{yi2017dualgan}, and Kim \\textit{et al}.~\\cite{kim2017learning} have tackled the image translation problem using a combination of adversarial and cycle-consistent losses. \nTaigman \\textit{et al}.~\\cite{taigman2016unsupervised} applied cycle-consistency in the feature level with the adversarial loss to learn a one-side translation from unpaired images.\nLiu \\textit{et al}.~\\cite{liu2017unsupervised} used a GAN combined with Variational Auto Encoder (VAE) to learn a shared latent space of two given image domains. 
\nLiang \\textit{et al}.~\\cite{liang2017generative} combined the ideas of adversarial and contrastive losses, using a contrastive GAN with cycle-consistency to learn the semantic transform of two given image domains with labels.\nInstead of trying to translate one image to another domain directly, our proposed approach focuses on exploring refining processes in multiple steps to generate a more realistic output with finer details by harnessing unpaired image data.\n\n\\smallskip\n\\noindent\\textbf{Multi-stage learning.}\nMany works have adopted multiple stages to tackle complex generation or transformation problems.\nEigen \\textit{et al}.~\\cite{eigen2015predicting} proposed a multi-scale network to predict depth, surface normals, and segmentation, which learns to refine the prediction result from coarse to fine. \nS2GAN introduced by Wang \\textit{et al}.~\\cite{wang2016generative} utilizes two networks arranged sequentially to first generate a structure image and then transform it into a natural scene.\nZhang \\textit{et al}.~\\cite{zhang2016stackgan} proposed StackGAN to generate high-resolution images from texts, which consists of two stages: the Stage-I network generates a coarse, low-resolution result, while the Stage-II network refines the result into a high-resolution, realistic image.\nChen \\textit{et al}.~\\cite{chen2017photographic} applied a stacked refinement network to generate scenes from segmentation layouts.\nTo generate high-resolution images from latent vectors, Karras \\textit{et al}.~\\cite{karras2017pregressive} started by generating a $4\\times 4$ resolution output, and then progressively stacked up both the generator and the discriminator to generate a $1024 \\times 1024$ realistic image.\nWang \\textit{et al}.~\\cite{wang2017high} applied a coarse-to-fine generator with a multi-scale discriminator to tackle the supervised image-to-image translation problem.\nDifferent from the existing works, this work exploits 
stacked image-to-image translation networks coupled with a novel adaptive fusion block to tackle the unsupervised image-to-image translation problem.\n\n\n\\section{Proposed Approach}\n\n\\subsection{Formulation}\nGiven two image domains $X$ and $Y$, the mutual translations between them can be denoted as two mappings $G:X \\to Y$ and $F:Y \\to X$, each of which takes an image from one domain and translates it to the corresponding representation in the other domain.\nExisting unsupervised image-to-image translation approaches~\\cite{zhu2017unpaired,yi2017dualgan,kim2017learning,liu2017unsupervised,taigman2016unsupervised} learn $G$ and $F$ in a single stage, which generates results lacking details and cannot handle complex translations.\n\nIn this paper, we decompose translations $G$ and $F$ into multi-stage mappings. For simplicity, we describe our method in a two-stage setting. Specifically, we decompose $G = G_{2} \\circ G_{1}$ and $F = F_{2} \\circ F_{1}$. $G_{1}$ and $F_{1}$ (\\textbf{Stage-1}) perform the cross-domain translation at a coarse scale, while $G_{2}$ and $F_{2}$ (\\textbf{Stage-2}) serve as refinements on top of the outputs from the previous stage.\nWe first finish the training of Stage-$1$ in low resolution and then train Stage-$2$ to learn refinement in higher resolution on top of the fixed Stage-$1$. \n\n\nTraining both stages in the same resolution would make it difficult for Stage-$2$ to bring further improvement, as Stage-$1$ has already been optimized with the same objective function (see Section \\ref{sec:comp}). \nOn the other hand, we find that learning in a lower resolution allows the model to generate visually more natural results, since the manifold underlying the low-resolution images is easier to model.\nTherefore, first, we constrain Stage-$1$ to train on 2x down-sampled image samples, denoted by $X_{\\downarrow} $ and $Y_{\\downarrow}$, to learn a base transformation. 
Second, based on the outputs of Stage-$1$, we train Stage-$2$ with image samples $X$ and $Y$ in the original resolution.\nSuch a formulation exploits the preliminary low-resolution results of Stage-$1$ and guides Stage-$2$ to focus on up-sampling and adding finer details, thus helping improve the overall translation quality.\n\nIn summary, to learn cross-domain translations $G: X \\to Y$ and $F: Y \\to X$ on given domains $X$ and $Y$, we first learn preliminary translations $G_1: X_{\\downarrow} \\to Y_{\\downarrow}$ and $F_1: Y_{\\downarrow} \\to X_{\\downarrow}$ at the 2x down-sampled scale. Then we use $G_{2}: X_{\\downarrow} \\to X$ and $F_2: Y_{\\downarrow} \\to Y$ to obtain the final output with finer details in the original resolution. Notice that we can iteratively decompose $G_2$ and $F_2$ into more stages.%\n\n\\subsection{Stage-1: Basic Translation}\n\\label{sec:stage1}\n\n\\begin{figure}[t]\n\\begin{center}\n \\includegraphics[width=0.4\\linewidth]{stage1.pdf}\n\\end{center}\n \\caption{Illustration of an overview of Stage-$1$ for learning coarse translations in low-resolution under an unsupervised setting. Solid arrow denotes an input-output, and dashed arrow denotes a loss.}\n\\label{fig:stage1}\n\\end{figure}\n\nIn general, our Stage-$1$ module adopts a similar architecture of CycleGAN~\\cite{zhu2017unpaired}, which consists of two image translation networks $G_{1}$, and $F_{1}$ and two discriminators $D_{X_1}, D_{Y_1}$. Note that Stage-$1$ is trained in low-resolution image domains $X_{\\downarrow}$ and $Y_{\\downarrow}$. \nFigure \\ref{fig:stage1} shows an overview of the Stage-$1$ architecture.\n\nGiven a sample $\\mathbf{x}_1 \\in X_{\\downarrow}$, $G_1$ translates it to a sample $\\mathbf{\\hat{y}}_1 = G_1(\\mathbf{x}_1)$ in the other domain $Y_{\\downarrow}$. On one hand, the discriminator $D_{Y_1}$ learns to classify the generated sample $\\mathbf{\\hat{y}}_1$ to class $0$ and the real image $\\mathbf{y}$ to class $1$, respectively. 
On the other hand, $G_1$ learns to deceive $D_{Y_1}$ by generating more and more realistic samples. This can be formulated as an adversarial loss:\n\\begin{align}\\begin{split}\n\\mathcal{L}_{adv}(G_1, D_{Y_1}, & X_\\downarrow, Y_\\downarrow) = \n\\mathbb{E}_{\\mathbf{y} \\sim Y\\downarrow}\\left[\\log(D_{Y_1}(\\mathbf{y}))\\right] \\\\\n&+ \\mathbb{E}_{\\mathbf{x} \\sim X\\downarrow}\\left[\\log(1-D_{Y_1}(G_1(\\mathbf{x})))\\right].\n\\end{split}\\end{align}\nWhile $D_{Y_1}$ tries to maximize $\\mathcal{L}_{adv}$, $G_1$ tries to minimize it. \nAfterward, we use $F_1$ to translate $\\mathbf{\\hat{y}}_1$ back to the domain $X_{\\downarrow}$, and constrain $F_1(\\mathbf{\\hat{y}}_1=G_1(\\mathbf{x}))$ to be close to the input $\\mathbf{x}$. This can be formulated as a cycle-consistent loss:\n\\begin{equation}\n\\mathcal{L}_{cycle}(G_1, F_1, X_\\downarrow) = \\mathbb{E}_{\\mathbf{x}\\sim X\\downarrow}\\left\\|\\mathbf{x} - F_1(G_1(\\mathbf{x}))\\right\\|_1.\n\\end{equation}\n\nSimilarly, for a sample $\\mathbf{y}_1 \\in Y_{\\downarrow}$, we use $F_1$ to perform translation, use $D_{X_1}$ to calculate the adversarial loss, and then use $G_1$ to translate backward to calculate the cycle-consistent loss. 
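To make the two Stage-$1$ training signals concrete, the sketch below evaluates both losses on toy numbers in plain Python. The lists stand in for discriminator outputs and flattened images, and `adv_loss`/`cycle_loss` are hypothetical helper names for illustration, not part of the paper's implementation.

```python
import math

def adv_loss(d_real, d_fake):
    # L_adv = E[log D(y)] + E[log(1 - D(G(x)))], averaged over the samples.
    # d_real / d_fake are discriminator probabilities on real / generated images.
    return (sum(math.log(p) for p in d_real) / len(d_real)
            + sum(math.log(1.0 - p) for p in d_fake) / len(d_fake))

def cycle_loss(x, x_rec):
    # L_cycle = E||x - F(G(x))||_1: mean absolute difference between an input
    # image and its reconstruction after the forward-backward translation.
    return sum(abs(a - b) for a, b in zip(x, x_rec)) / len(x)

# Toy numbers: two real/fake discriminator scores, and a flattened
# 3-pixel "image" against its cycle reconstruction.
print(adv_loss([0.9, 0.8], [0.2, 0.1]))
print(cycle_loss([0.5, 0.7, 0.2], [0.45, 0.75, 0.2]))
```

In training, $D_{Y_1}$ maximizes the adversarial term while $G_1$ minimizes it, and the L1 cycle term anchors $F_1(G_1(\mathbf{x}))$ to the input.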
Our full objective function for Stage-$1$ is a combination of the adversarial loss and the cycle-consistent loss:\n\\begin{dmath}\n\\mathcal{L}_{Stage1} =\n\\mathcal{L}_{adv}(G_1, D_{Y_1}, X_{\\downarrow}, Y_{\\downarrow}) + \n\\mathcal{L}_{adv}(F_1, D_{X_1}, Y_{\\downarrow}, X_{\\downarrow}) + \n\\lambda[\\mathcal{L}_{cycle}(G_1, F_1, X_{\\downarrow}) +\n\\mathcal{L}_{cycle}(F_1, G_1, Y_{\\downarrow})],\n\\label{equ:loss}\n\\end{dmath}\nwhere $\\lambda$ denotes the weight of the cycle-consistent loss.\nWe obtain the translations $G_1$ and $F_1$ by optimizing the following objective function: \n\\begin{equation}\nG_1, F_1 = \\argmin_{G_1,F_1} \\max_{D_{X_1}, D_{Y_1}} \\mathcal{L}_{Stage1},\n\\label{equ:minmax}\n\\end{equation}\nwhich encourages these translations to map inputs to the other domain while preserving the intrinsic image content. \nAs a result, the optimized translations $G_1$ and $F_1$ can perform a basic cross-domain translation in low resolution. \n\n\\subsection{Stage-2: Refinement}\n\\label{sec:stage2}\n\n\\begin{figure*}[t]\n\\begin{center}\n \\includegraphics[trim={0 0 0 0},clip,width=0.87\\linewidth]{overview.pdf}\n\\end{center}\n \\caption{Illustration of an overview of our Stage-$2$ for learning refining processes on top of Stage-$1$'s outputs. $G_{1}$ and $F_{1}$ are the translation networks learned in Stage-$1$. \n In the training process, we keep the weights of $G_1$ and $F_1$ fixed. 
Solid arrow denotes an input-output, and dashed arrow denotes a loss.}\n\\label{fig:overview}\n\\end{figure*}\n\n\nSince it is difficult to learn a complicated translation with the limited ability of a single stage, the translated output of Stage-$1$ may seem plausible but still leaves us much room for improvement.\nTo refine the output of Stage-$1$, we deploy Stage-$2$ with a stacked structure built on the top of the trained Stage-$1$ to complete the full translation to generate higher resolution results with finer details.\n\n\nStage-$2$ consists of two translation networks $G_2$, $F_2$ and two discriminator networks $D_{X_2}$, $D_{Y_2}$, as shown in Figure \\ref{fig:overview}.\nWe only describe the architecture of $G_{2}$, since $F_{2}$ shares the same design (see Figure \\ref{fig:overview}).\n\n$G_2$ consists of two parts: a newly initialized image translation network $G_2^T$ and an adaptive fusion block $G_2^F$.\nGiven the output of Stage-$1$ ($\\mathbf{\\hat{y}}_1 = G_1(\\mathbf{x}_1)$), we use nearest up-sampling to resize it to match the original resolution.\nDifferent from the image translation network in Stage-$1$, which only takes $\\mathbf{x} \\in X$ as input, in Stage-$2$ we use both the current stage's input $\\mathbf{x}$ and the previous stage's output $\\mathbf{\\hat{y}}_1$. Specifically, we concatenate $\\mathbf{\\hat{y}}_1$ and $\\mathbf{x}$ along the channel dimension, and utilize $G_2^T$ to obtain the refined result\n$\\mathbf{\\hat{y}}_2 = G_{2}^T(\\mathbf{\\hat{y}}_1, \\mathbf{x})$.\n\n\n\\begin{figure}[t]\n\\begin{center}\n \\includegraphics[width=0.65\\linewidth]{alpha.pdf}\n\\end{center}\n \\caption{Illustration of the linear combination in an adaptive fusion block. 
The fusion block applies the fusion weight map $\\bm\\alpha$ to find defects in the previous result $\\mathbf{\\hat y}_1$ and correct it precisely using $\\mathbf{\\hat y}_2$ to produce a refined output $\\mathbf{y}_2$.}\n\\label{fig:alpha}\n\\end{figure}\n\n\nBesides simply using $\\mathbf{\\hat{y}}_2$ as the final output, we introduce an adaptive fusion block $G_2^F$ to learn a dynamic combination of $\\mathbf{\\hat{y}}_2$ and $\\mathbf{\\hat{y}}_1$ to fully utilize the entire two-stage structure. Specifically, the adaptive fusion block learns a pixel-wise linear combination of the previous results: \n\\begin{equation}\nG_2^F(\\mathbf{\\hat{y}}_1, \\mathbf{\\hat{y}}_2) = \\mathbf{\\hat{y}_1} \\odot (1-\\bm{\\alpha}_{x}) + \\mathbf{\\hat{y}_2} \\odot \\bm{\\alpha}_{x},\n\\label{equ:alpha}\n\\end{equation}\nwhere $\\odot$ denotes element-wise product and $\\bm{\\alpha} \\in (0,1)^{H \\times W}$ represents the fusion weight map, which is predicted by a convolutional network $h_{x}$:\n\\begin{equation}\n\\bm{\\alpha}_{x} = h_x(\\mathbf{x}, \\mathbf{\\hat{y}}_1, \\mathbf{\\hat{y}}_2).\n\\label{equ:alphah}\n\\end{equation}\nFigure \\ref{fig:alpha} shows an example of adaptively combining the outputs from two stages. 
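The pixel-wise combination in Equation \ref{equ:alpha} can be sketched numerically in plain Python. Here the small conv net $h_x$ is replaced by a hypothetical list of raw per-pixel scores, and the sigmoid plays the role of the paper's Convolution-Sigmoid output that keeps $\bm\alpha$ in $(0,1)$:

```python
import math

def sigmoid(v):
    return 1.0 / (1.0 + math.exp(-v))

def adaptive_fusion(y1, y2, scores):
    # y = y1 * (1 - alpha) + y2 * alpha, element-wise.
    # `scores` stands in for the output of the fusion network h_x;
    # the sigmoid squashes each score into (0, 1) to form the weight map.
    alpha = [sigmoid(s) for s in scores]
    return [a * (1 - w) + b * w for a, b, w in zip(y1, y2, alpha)]

# Toy 1x3 "images": a strongly negative score keeps the coarse y1 pixel,
# a strongly positive score trusts the refined y2 pixel.
out = adaptive_fusion([0.0, 0.0, 0.0], [1.0, 1.0, 1.0], [-10.0, 0.0, 10.0])
print([round(v, 3) for v in out])  # -> [0.0, 0.5, 1.0]
```

This mirrors how the learned weight map can suppress Stage-$2$ where Stage-$1$ is already good and apply corrections only where defects are found.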
\n\nSimilar to Stage-$1$, we use a combination of adversarial and cycle-consistent losses to formulate our objective function of Stage-$2$:\n\\begin{dmath}\n\\mathcal{L}_{Stage2} = \\mathcal{L}_{adv}(G_2 \\circ G_1, D_{Y_2}, X, Y) + \\mathcal{L}_{adv}(F_2 \\circ F_1, D_{X_2}, Y, X) + \\lambda \\left[\\mathcal{L}_{cycle}(G_2 \\circ G_1, F_2 \\circ F_1, X) + \\mathcal{L}_{cycle}(F_2 \\circ F_1, G_2 \\circ G_1, Y)\\right].\n\\label{equ:loss2}\n\\end{dmath}\nOptimizing this objective is similar to solving Equation \\ref{equ:minmax}.\nThe translation networks $G_2$ and $F_2$ are learned to refine the previous results by correcting defects and adding details on them.\n\nFinally, we complete our desired translations $G$ and $F$ by integrating the transformations in Stage-$1$ and Stage-$2$, which are capable of tackling a complex image-to-image translation problem under the unsupervised setting.\n\n\\section{Experiments}\nThe proposed approach is named \\textit{SCAN} or \\textit{SCAN Stage-$N$} if it has $N$ stages in the following experiments.\nWe explore several variants of our model to evaluate the effectiveness of our design in Section \\ref{sec:ablation}. 
In all experiments, we decompose the target translation into two stages, except for exploring the ability of the three-stage architecture in high-resolution tasks in Section~\\ref{sec:highres}.\n\nWe used the officially released models of CycleGAN~\\cite{zhu2017unpaired}\nand Pix2Pix~\\cite{isola2016image}\nfor $256\\times256$ image translation comparisons.\nFor $512\\times512$ tasks, we trained CycleGAN with the official code\nsince there is no available pre-trained model.\n\n\\subsection{Network Architecture}\n\nFor the image translation network, we follow the settings of \\cite{zhu2017unpaired,liang2017generative}, adopting the encoder-decoder architecture from Johnson \\textit{et al}.~\\cite{johnson2016perceptual}.\nThe network consists of two down-sample layers implemented by stride-2 convolution, six residual blocks, and two up-sample layers implemented by sub-pixel convolution \\cite{shi2016real}.\nNote that different from \\cite{zhu2017unpaired}, which used fractionally strided convolution as the up-sample block, we use sub-pixel convolution \\cite{shi2016real} to avoid checkerboard artifacts \\cite{odena2016deconvolution}.\nThe adaptive fusion block is a simple 3-layer convolutional network, which calculates the fusion weight map $\\bm\\alpha$ using two Convolution-InstanceNorm-ReLU blocks followed by a Convolution-Sigmoid block.\nFor the discriminator, we use the PatchGAN structure introduced in \\cite{isola2016image}.\n\n\\subsection{Datasets}\n\nTo demonstrate the capability of our proposed method for tackling the complex image-to-image translation problem under unsupervised settings, we first conduct experiments on the Cityscapes dataset \\cite{cordts2016cityscapes}. 
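The up-sample layers above rely on sub-pixel convolution, whose core is a channel-to-space rearrangement: $r^2$ low-resolution channel planes are interleaved into one plane that is $r\times$ larger in each spatial dimension. The sketch below implements that rearrangement in plain Python for a single output channel; the exact channel ordering varies between implementations, so this is an illustrative assumption rather than the paper's code.

```python
def pixel_shuffle(feat, h, w, r):
    # feat: list of r*r channel planes, each a row-major list of h*w floats.
    # Rearranges the r^2 low-resolution channels into one (h*r) x (w*r)
    # plane -- the channel-to-space step of sub-pixel convolution.
    out = [[0.0] * (w * r) for _ in range(h * r)]
    for c, plane in enumerate(feat):
        dy, dx = divmod(c, r)  # sub-pixel offset encoded by the channel index
        for i in range(h):
            for j in range(w):
                out[i * r + dy][j * r + dx] = plane[i * w + j]
    return out

# Four channels of a 1x1 feature map become one 2x2 output patch.
print(pixel_shuffle([[1.0], [2.0], [3.0], [4.0]], 1, 1, 2))
# -> [[1.0, 2.0], [3.0, 4.0]]
```

Because every output pixel is written exactly once from a learned channel, this avoids the uneven kernel overlap that causes checkerboard artifacts with fractionally strided convolution.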
\nWe compare with the state-of-the-art approaches in the \\textit{Labels $\\leftrightarrow$ Photo} task in $256 \\times 256$ resolution.\nTo further show the effectiveness of our method in learning complex translations, we also extended the input size to a challenging $512 \\times 512$ resolution, namely the high-resolution Cityscapes \\textit{Labels $\\rightarrow$ Photo} task.\n\nBesides the \\textit{Labels $\\leftrightarrow$ Photo} task, we also select six image-to-image translation tasks from \\cite{zhu2017unpaired}, including \\textit{Map$\\leftrightarrow$Aerial}, \\textit{Facades$\\leftrightarrow$Labels}\nand \\textit{Horse$\\leftrightarrow$Zebra}. We compare our method with CycleGAN \\cite{zhu2017unpaired} in these tasks in $256 \\times 256$ resolution.\n\n\\subsection{Training Details}\n\nNetworks in Stage-$1$ are trained from scratch, while networks in Stage-N are trained with the \\{Stage-$1$, $\\cdots$, Stage-(N-1)\\} networks fixed. For the GAN loss, different from previous works \\cite{zhu2017unpaired,isola2016image}, we adopt a gradient penalty term $\\lambda_{gp}(||\\nabla D(x)||_2 - 1)^2$ in the discriminator loss to achieve a more stable training process~\\cite{kodali2017train}. \nFor all datasets, the Stage-$1$ networks are trained in $128 \\times 128$ resolution, and the Stage-$2$ networks are trained in $256 \\times 256$ resolution. For the three-stage architecture in Section~\\ref{sec:highres}, the Stage-$3$ networks are trained in $512 \\times 512$ resolution.\nWe set the batch size to $1$, $\\lambda = 10$, and $\\lambda_{\\text{gp}} =10$ in all experiments.\nAll stages are trained for $100$ epochs on all datasets. 
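The gradient penalty term above can be sketched as a small function of the discriminator's input gradient; in real training the gradient vector would come from the framework's autograd on $D$, so the explicit `grad` argument here is an illustrative stand-in:

```python
import math

def gradient_penalty(grad, lambda_gp=10.0):
    # lambda_gp * (||grad_x D(x)||_2 - 1)^2 for one sample's gradient vector.
    # Penalizes the discriminator whenever its gradient norm drifts from 1,
    # which stabilizes GAN training.
    norm = math.sqrt(sum(g * g for g in grad))
    return lambda_gp * (norm - 1.0) ** 2

print(gradient_penalty([2.0, 0.0]))   # norm 2 -> 10 * (2 - 1)^2 = 10.0
print(gradient_penalty([0.6, 0.8]))   # norm 1 -> essentially zero penalty
```

With $\lambda_{\text{gp}}=10$ as in the paper's setting, this term is simply added to the discriminator loss of each stage.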
We use Adam \\cite{kingma2014adam} to optimize our networks with an initial learning rate as $0.0002$, and decrease it linearly to zero in the last $50$ epochs.\n\n\\subsection{Evaluation Metrics} \n\n\\noindent\\textbf{FCN Score and Segmentation Score.}\nFor the Cityscapes dataset, we adopt the FCN Score and the Segmentation Score as evaluation metrics from~\\cite{isola2016image} for the \\textit{Labels $\\rightarrow$ Photo} task and the \\textit{Photo $\\rightarrow$ Labels} task, respectively.\nThe FCN Score employs an off-the-shelf FCN segmentation network \\cite{long2015fully} to estimate the realism of the translated images.\nThe Segmentation Score includes three standard segmentation metrics, which are the per-pixel accuracy, the per-class accuracy, and the mean class accuracy, as defined in~\\cite{long2015fully}.\n\n\\noindent\\textbf{PSNR and SSIM.}\nBesides using the FCN Score and the Segmentation Score, we also calculate the PSNR and the SSIM\\cite{wang2004image} for a quantitative evaluation.\nWe apply the above metrics on the \\textit{Map $\\leftrightarrow$ Aerial} task and the \\textit{Facades $\\leftrightarrow$ Labels} task to measure both the color similarity and the structural similarity between the translated outputs and the ground truth images.\n\n\\noindent\\textbf{User Preference.} \\label{sec:userstudy}\nWe run user preference tests in the high-resolution Cityscapes \\textit{Labels $\\rightarrow$ Photos} task and the \\textit{Horse$\\rightarrow$Zebra} tasks\nfor evaluating the realism of our generated photos. 
\nIn the user preference test, each time a user is presented with a pair of results from our proposed \\proposedd~ and CycleGAN~\\cite{zhu2017unpaired}, and is asked which one is more realistic.\nEach pair of results is translated from the same image.\nImages are all shown \nin randomized order.\nIn total, $30$ images from the Cityscapes test set and $10$ images from the Horse2Zebra test set are used in the user preference tests.\nAs a result, $20$ participants make a total of $600$ and $200$ preference choices, respectively.\n\n\\subsection{Comparisons} \\label{sec:comp}\n\n\\begin{figure*}[t]\n\\begin{center}\n \\includegraphics[width=\\linewidth]{city-realseg.pdf}\n\\end{center}\n \\caption{Comparisons on the Cityscapes dataset of $256\\times256$ resolution. The left subfigure shows \\textit{Labels $\\rightarrow$ Photo} results and the right shows \\textit{Photo $\\rightarrow$ Labels} results. In the \\textit{Labels $\\rightarrow$ Photo} task, our proposed \\proposedd~ generates more natural photographs than CycleGAN; in the \\textit{Photo $\\rightarrow$ Labels} task, \\proposedd~ produces an accurate segmentation map while CycleGAN's results are blurry and suffer from deformation. SCAN also generates results that are visually closer to those of the supervised approach Pix2Pix than results of CycleGAN. Zoom in for a better view.}\n %\n\\label{fig:comp-city}\n\\end{figure*}\n\n\\begin{table}[t]\n\\small\n\\tabcolsep=0.11cm\n\\caption{FCN Scores in the Labels $\\rightarrow$ Photo task and Segmentation Scores in the Photo $\\rightarrow$ Labels task on the Cityscapes dataset. The proposed methods are named after \\textit{SCAN (Stage-1 resolution)-(Stage-2 resolution)}. \\textit{FT} means that we also \\textit{fine-tune} the Stage-1 model instead of fixing its weights. 
\\textit{FS} means directly training Stage-2 \\textit{from-scratch} without training the Stage-1 model.}\n\\begin{center}\n\\resizebox{0.85\\columnwidth}{!}{\n\\begin{tabular}{lcccccc}\n\\hline\n & \\multicolumn{3}{c}{\\textbf{Labels $\\rightarrow$ Photo}}\n & \\multicolumn{3}{c}{\\textbf{Photo $\\rightarrow$ Labels}} \\\\\n\\bf{Method} & \\bf{Pixel acc.} & \\bf{Class acc.} & \\bf{Class IoU} & \\bf{Pixel acc.} & \\bf{Class acc.} & \\bf{Class IoU} \\\\\n\\hline \nCycleGAN~\\cite{zhu2017unpaired} & 0.52 & 0.17 & 0.11 & 0.58 & 0.22 & 0.16 \\\\\nContrast-GAN~\\cite{liang2017generative} & 0.58 & \\bf0.21 & \\bf0.16 & 0.61 & 0.23 & 0.18 \\\\\n\\proposedd~ Stage-1 128 & 0.46 & 0.19 & 0.12 & 0.71 & 0.24 & 0.20 \\\\\n\\proposedd~ Stage-1 256 & 0.57 & 0.15 & 0.11 & 0.63 & 0.18 & 0.14 \\\\\n\\proposedd~ Stage-2 256-256 & 0.52 & 0.15 & 0.11 & 0.64 & 0.18 & 0.14 \\\\\n\\proposedd~ Stage-2 128-256 \\textit{FS} & 0.59 & 0.15 & 0.10 & 0.36 & 0.10 & 0.05 \\\\\n\\proposedd~ Stage-2 128-256 \\textit{FT} & 0.61 & 0.18 & 0.13 & 0.62 & 0.19 & 0.13 \\\\\n\\proposedd~ Stage-2 128-256 & \\bf0.64 & 0.20 & \\bf0.16 & \\bf0.72 & \\bf0.25 & \\bf0.20 \\\\\n\\hline\nPix2Pix~\\cite{isola2016image} & 0.71 & 0.25 & 0.18 & 0.85 & 0.40 & 0.32 \\\\\n\\hline\n\\end{tabular}\n}\n\\end{center}\n\\label{tab:city}\n\\end{table}\n\n\\noindent\\textbf{Cityscapes \\textit{Labels $\\leftrightarrow$ Photo.}}\nTable \\ref{tab:city} shows the comparison of our proposed method \\proposedd~ and its variants with state-of-the-art methods in the Cityscapes \\textit{Labels $\\leftrightarrow$ Photo} tasks. \nThe same unsupervised settings are adopted by all methods except Pix2Pix, which is trained under a supervised setting.\n\nOn the FCN Scores, our proposed \\proposedd~ Stage-2 128-256 outperforms the state-of-the-art approaches considering the pixel accuracy, while being competitive considering the class accuracy and the class IoU. 
\nOn the Segmentation Scores, \\proposedd~ Stage-2 128-256 outperforms state-of-the-art approaches in all metrics.\nComparing SCAN Stage-1 256 with CycleGAN, our modified network yields improved results, which, however, still perform inferiorly to SCAN Stage-2 128-256.\nAlso, we can find that \\proposedd~ Stage-2 128-256 achieves performance much closer to the supervised approach Pix2Pix\\cite{isola2016image} than the others.\n\nWe also compare our \\proposedd~ Stage-$2$ 128-256 with different variants of SCAN. \nComparing \\proposedd~ Stage-$2$ 128-256 with the \\proposedd~ Stage-$1$ approaches, we can find a substantial improvement on the FCN Scores, which indicates that adding the Stage-$2$ refinement helps to improve the realism of the output images.\nOn the Segmentation Score, comparison of \\proposedd~ Stage-$1$ 128 and \\proposedd~ Stage-$1$ 256 shows that learning in low resolution yields better performance. Comparison between \\proposedd~ Stage-$2$ 128-256 and \\proposedd~ Stage-$1$ 128 shows that adding Stage-2 can further improve on the Stage-1 results.\nTo experimentally prove that the performance gain does not come merely from adding model capacity, we conducted a \\proposedd~ Stage-2 256-256 experiment, which performs inferiorly to SCAN Stage-2 128-256. \n\nTo further analyze various experimental settings, we also conducted our \\proposedd~ Stage-2 128-256 in two additional settings, namely \\textit{learning two stages from scratch} and \\textit{fine-tuning Stage-1}. We add supervision signals to both stages for these two settings.\nLearning two stages from scratch shows poor performance in both tasks, which indicates that jointly training the two stages does not guarantee a performance gain. 
The reason may be that directly training a high-capacity generator is difficult.\nAlso, fine-tuning Stage-1 does not resolve this problem and brings a smaller improvement than fixing the weights of Stage-1.\n\nTo examine the effectiveness of the proposed fusion block, we compare it with several variants: \n1) \\textit{Learned Pixel Weight} (LPW), which is our proposed fusion block; \n2) \\textit{Uniform Weight} (UW), in which the two stages are fused with the same weight at all pixel locations, $\\mathbf{\\hat{y}_1} (1-w) + \\mathbf{\\hat{y}_2} w$, and during training $w$ gradually increases from 0 to 1; \n3) \\textit{Learned Uniform Weight} (LUW), which is similar to \\textit{UW}, but $w$ is a learnable parameter instead; \n4) \\textit{Residual Fusion} (RF), which uses a simple residual fusion $\\mathbf{\\hat{y}_1} + \\mathbf{\\hat{y}_2}$.\nThe results are illustrated in Table~\\ref{tab:add}. It can be observed that our proposed LPW fusion yields the best performance among all alternatives, which indicates that the LPW approach can learn a better fusion of the outputs from the two stages than approaches with uniform weights.\n\n\\begin{table}[t]\n\\small\n\\tabcolsep=0.11cm\n\\caption{FCN Scores and Segmentation Scores of several variants of the fusion block on the Cityscapes dataset.} \n\\begin{center}\n\\resizebox{0.8\\columnwidth}{!}{\n\\begin{tabular}{lcccccc}\n\\hline\n & \\multicolumn{3}{c}{\\textbf{Labels $\\rightarrow$ Photo}}\n & \\multicolumn{3}{c}{\\textbf{Photo $\\rightarrow$ Labels}} \\\\\n\\bf{Method} & \\bf{Pixel acc.} & \\bf{Class acc.} & \\bf{Class IoU} & \\bf{Pixel acc.} & \\bf{Class acc.} & \\bf{Class IoU} \\\\\n\\hline \nCycleGAN & 0.52 & 0.17 & 0.11 & 0.58 & 0.22 & 0.16 \\\\ \n\\proposedd~ 128-256 LPW & \\bf0.64 & \\bf0.20 & \\bf0.16 & \\bf0.72 & \\bf0.25 & \\bf0.20 \\\\\n\\proposedd~ 128-256 UW & 0.59 & 0.19 & 0.14 & 0.66 & 0.22 & 0.17 \\\\ \n\\proposedd~ 128-256 LUW & 0.59 & 0.18 & 0.12 & 0.70 & 0.24 & 0.19 \\\\\n\\proposedd~ 128-256 
RF & 0.60 & 0.19 & 0.13 & 0.68 & 0.23 & 0.18 \\\\\n\\hline\n\\end{tabular}\n}\n\\end{center}\n\\label{tab:add}\n\\end{table}\n\nIn Figure \\ref{fig:comp-city}, we visually compare our results with those of the CycleGAN and the Pix2Pix. \nIn the \\textit{Labels $\\rightarrow$ Photo} task, \\proposedd~ generates more realistic and vivid photos compared to the CycleGAN. Also, the details in our results appear closer to those of the supervised approach Pix2Pix. \nIn the \\textit{Photo $\\rightarrow$ Labels} task, while \\proposedd~ can generate more accurate semantic layouts that are closer to the ground truth, the results of the CycleGAN suffer from distortion and blur.\n\n\\begin{figure*}[t]\n\\begin{center}\n \\includegraphics[trim={0 1.5cm 0 0},clip,width=0.72\\linewidth]{512multi.pdf}\n\\end{center}\n \\caption{Translation results in the \\textit{Labels $\\rightarrow$ Photo} task on the Cityscapes dataset of $512 \\times 512$ resolution. Our proposed \\proposedd~ produces realistic images that even look at a glance like the ground-truths. Zoom in for best view.}\n\\label{fig:512}\n\\end{figure*}\n\n\\smallskip\n\\noindent\\textbf{High-Resolution Cityscapes \\textit{Labels $\\rightarrow$ Photo}.} \\label{sec:highres}\nThe CycleGAN only considers images in 256$\\times$256 resolution, and results of training CycleGAN directly in 512$\\times$512 resolution are not satisfactory, as shown in Figure \\ref{fig:1} and Figure \\ref{fig:512}.\n\nBy iteratively decomposing the Stage-$2$ into a Stage-$2$ and a Stage-$3$, we obtain a three-stage SCAN.\nDuring the translation process, the resolution of the output is growing from $128 \\times 128$ to $256 \\times 256$ and to $512 \\times 512$, as shown in Figure \\ref{fig:1}. Figure \\ref{fig:512} shows the comparison between our \\proposedd~ and the CycleGAN in the high-resolution Cityscapes \\textit{Labels $\\rightarrow$ Photo} task. 
We can clearly see that our proposed \\proposedd~ generates more realistic photos compared with the results of CycleGAN, and SCAN's outputs are visually closer to the ground truth images. \nThe first row shows that our results contain realistic trees with plenty of details, while the CycleGAN only generates repeated patterns. \nFor the second row, we can observe that the CycleGAN tends to simply ignore the cars by filling it with a plain grey color, while cars in our results have more details.\n\nAlso, we run a user preference study comparing \\proposedd~ with the CycleGAN with the setting described in Section~\\ref{sec:userstudy}. \nAs a result, 74.9\\% of the queries prefer our SCAN's results, 10.9\\% prefer the CycleGAN's results, and 14.9\\% suggest that the two methods are equal. \nThis result shows that our \\proposedd~ can generate overall more realistic translation results against the CycleGAN in the high-resolution translation task.\n\n\n\\begin{table}[!t]\n\\begin{center}\n\\caption{PSNR and SSIM values in the \\textit{Map$\\leftrightarrow$Aerial} and \\textit{Facades$\\leftrightarrow$Labels} tasks.}\n\\label{tab:mapfacades}\n\\resizebox{0.75\\columnwidth}{!}{\n\\begin{tabular}{lcccccccc}\n\\hline\n & \\multicolumn{2}{c}{\\textbf{Aerial $\\rightarrow$ Map}} \n & \\multicolumn{2}{c}{\\textbf{Map $\\rightarrow$ Aerial}} \n & \\multicolumn{2}{c}{\\textbf{Facades $\\rightarrow$ Labels}}\n & \\multicolumn{2}{c}{\\textbf{Labels $\\rightarrow$ Facades}} \\\\\n\\textbf{Method} & \\textbf{PSNR} & \\textbf{SSIM} & \\textbf{PSNR} & \\textbf{SSIM} & \\textbf{PSNR} & \\textbf{SSIM} & \\textbf{PSNR} & \\textbf{SSIM} \\\\\n\\hline\nCycleGAN\\cite{zhu2017unpaired} & 21.59 & 0.50 & 12.67 & 0.06 & 6.68 & 0.08 & 7.61 & 0.11 \\\\\n\\proposedd~ & \\bf25.15 & \\bf0.67 & \\bf14.93 & \\bf0.23 & \\bf8.28 & \\bf0.29 & \\bf10.67 & \\bf0.17\\\\\n\\hline\n\\end{tabular}\n}\n\\end{center}\n\\end{table}\n\n\\begin{figure}[t]\n\\begin{center}\n 
\\includegraphics[width=0.77\\linewidth]{mapfacades.pdf}\n\\end{center}\n \\caption{Translation results in the Labels$\\rightarrow$Facades task and the Aerial$\\rightarrow$Map task. Results of our proposed SCAN show finer details in both the tasks comparing with CycleGAN's results.}\n\\label{fig:mapfacade}\n\\end{figure}\n\n\\begin{figure}[t]\n\\begin{center}\n \\includegraphics[width=\\linewidth]{horse2zebra.pdf}\n\\end{center}\n \\caption{\n Translation results in the Horse$\\leftrightarrow$Zebra tasks. CycleGAN changes both desired objects and backgrounds. Adding an identity loss can fix this issue, but tends to be blurry compared with those from SCAN, which never uses the identity loss.\n }\n\\label{fig:appleorange}\n\\end{figure}\n\n\\smallskip\n\\noindent\\textbf{\\textit{Map$\\leftrightarrow$Aerial} and \\textit{Facades$\\leftrightarrow$Labels}.}\nTable \\ref{tab:mapfacades} reports the performances regarding the PSNR\/SSIM metrics. \nWe can see that our methods outperform the CycleGAN in both metrics, which indicates that our translation results are more similar to ground truth in terms of colors and structures.\n\nFigure \\ref{fig:mapfacade} shows some of the sample results in the Aerial$\\rightarrow$Map task and the Labels$\\rightarrow$Facades task. \nWe can observe that our results contain finer details while the CycleGAN results tend to be blurry. \n\n\\smallskip\n\\noindent\\textbf{\\textit{Horse$\\leftrightarrow$Zebra}.}\nFigure \\ref{fig:appleorange} compares the results of SCAN against those of the CycleGAN in the Horse$\\leftrightarrow$Zebra task.\nWe can observe that both \\proposedd~ and the CycleGAN successfully translate the input images to the other domain. 
\nAs Figure \\ref{fig:appleorange} shows, CycleGAN changes not only the desired objects in the input images but also the backgrounds.\nAdding the identity loss~\\cite{zhu2017unpaired} can fix this problem, but the results still tend to be blurry compared with those from our proposed SCAN.\nA user preference study on Horse$\\rightarrow$Zebra translation is performed with the setting described in Section~\\ref{sec:userstudy}. \nAs a result, 76.3\\% of the subjects prefer our SCAN's results over CycleGAN's, while 68.9\\% prefer SCAN's results over CycleGAN+idt's.\n\n\\subsection{Visualization of Fusion Weight Distributions}\n\nTo illustrate the role of the adaptive fusion block, \nwe visualize the three average distributions of fusion weights ($\\bm{\\alpha}_{x}$ in Equation \\ref{equ:alpha}) over 1000 samples from the Cityscapes dataset at epochs 1, 10, and 100, as shown in Figure \n\\ref{fig:alphaevol}. \nWe observed that the distribution of the fusion weights gradually shifts from left to right.\nThis indicates a consistent increase of the weight values in the fusion maps, which implies that more and more details of the second stage are brought to the final output.\n\n\\subsection{Ablation Study} \\label{sec:ablation}\n\n\\begin{figure}[t]\n\\begin{center}\n \\includegraphics[trim={0 2.5cm 0 -1cm},clip,width=0.7\\linewidth]{alpha_evol_map.pdf}\n\\end{center}\n \\caption{\n \\color{black}\n Distributions of fusion weights over all pixels in different epochs. Each distribution is an average result over 1000 sample images from the Cityscapes dataset. 
\n Dashed arrows indicate average weights of fusion maps.\n }\n\\label{fig:alphaevol}\n\\end{figure}\n\n\nIn Section \\ref{sec:comp}, we report the evaluation results of \\proposedd~ and its variants, here we further explore SCAN by removing modules from it: \n\\begin{itemize}\n\\item \\proposedd~ w\/o \\textit{Skip} Connection: remove the skip connection from the input to the translation network in the Stage-$2$ model \n, denoted by \\textit{\\proposedd~ w\/o Skip}.\n\\item \\proposedd~ w\/o Adaptive \\textit{Fusion} Block: remove the final adaptive fusion block in the Stage-$2$ model \n, denoted by \\textit{\\proposedd~ w\/o Fusion}.\n\\item \\proposedd~ w\/o \\textit{Skip} Connection and Adaptive \\textit{Fusion} Block: remove both the skip connection from the input to the translation network and the adaptive fusion block in the Stage-$2$ model \n, denoted by \\textit{SCAN w\/o Skip,} \\textit{Fusion}.\n\\end{itemize}\n\n\n\\begin{table}[t]\n\\caption{FCN Scores in the Cityscapes dataset for ablation study, evaluated on the \\textit{Labels $\\rightarrow$ Photo} task with different variants of the proposed SCAN.}\n\\begin{center}\n\\resizebox{0.72\\columnwidth}{!}{\n\\begin{tabular}{lccc}\n\\hline\n\\bf{Method} & \\bf{Pixel acc.} & \\bf{Class acc.} & \\bf{Class IoU} \\\\\n\\hline \n\\proposedd~Stage-1~ 128 & 0.457 & 0.188 & 0.124 \\\\\n\\proposedd~ Stage-2 128-256 w\/o Skip,Fusion & 0.513 & 0.186 & 0.125 \\\\\n\\proposedd~ Stage-2 128-256 w\/o Skip & 0.593 & 0.184 & 0.136 \\\\\n\\proposedd~ Stage-2 128-256 w\/o Fusion & 0.613 & 0.194 & 0.137 \\\\\n\\proposedd~ Stage-2 128-256 & \\bf0.637 & \\bf0.201 & \\bf0.157 \\\\\n\\hline\n\\end{tabular}\n}\n\\end{center}\n\\label{tab:fcn-alphaskip}\n\\end{table}\n\nTable \\ref{tab:fcn-alphaskip} shows the results of the ablation study, in which we can observe that removing either the adaptive fusion block or the skip connection downgrades the performance.\nWith both of the components removed, the stacked networks obtain 
only a marginal performance gain compared with Stage-$1$.\nNote that the fusion block only consists of three convolution layers, which have a relatively small size compared to the whole network.\nAs shown in Table \\ref{tab:city}, in the SCAN Stage-2 256-256 experiment we double the network parameters compared to SCAN Stage-1 256, yet observe no improvement in the Label $\\rightarrow$ Photo task.\nThus, the improvement from the fusion block does not simply come from the added capacity.\n\nTherefore, we can conclude that using our proposed SCAN structure, which consists of the skip connection and the adaptive fusion block, is critical for improving the overall translation performance.\n\n\\section{Conclusions}\n\nIn this paper, we proposed a novel approach to tackle the unsupervised image-to-image translation problem by exploiting a stacked network structure with cycle-consistency, namely SCAN.\nThe proposed \\proposedd~ decomposes a complex image translation process into a coarse translation step and multiple refining steps, and then applies the cycle-consistency to learn the target translation from unpaired image data.\nExtensive experiments on multiple datasets demonstrate that our proposed \\proposedd~ outperforms the existing methods in quantitative metrics and generates more visually pleasing translation results with finer details.\n\n\n\\clearpage\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\n\n\n\\section{Microcode Primitives}\n\\label{cucode:section:primitives}\n\nMicrocode programs supported by modern processors, combined with the ability to\nupdate this microcode, can provide a range of useful security primitives that can be used to build system defenses. \nIn the following, we explore several key primitives and discuss in\nSection~\\ref{cucode:section:case_study} how system defenses can be implemented based on\nour analysis results described in the previous section. 
\n\n\\par{\\bf Enabling or disabling CPU features at runtime}\n\nDespite recently uncovered security issues such as \\textsc{Spectre} and \\textsc{Meltdown}~\\cite{Lipp2018meltdown,projectzeromeltdownspectre,Kocher2018spectre},\nspeculative execution is an important feature that enables the performance of current \\ac{CPU} families.\nWhile the na\\\"ive coun\\-ter\\-mea\\-sure---disabling speculative execution completely---provides a high level of security,\nit significantly reduces the performance of a given system.\nHowever, if speculative execution could be disabled only temporarily or only for certain program states,\na trade-off between security and performance could be achieved.\n\nAnother example of a feature that can be used by both benign and malicious applications is the availability of high-resolution timers.\nSuch timers allow an attacker to abuse microarchitectural timing side channels to gather information from otherwise inaccessible contexts~\\cite{crypto:1996:kocher,Hund,pakt:2012,206170}.\nIn both cases, microcode can improve security by applying a fine-grained permission model on top of existing protection mechanisms\nby restricting features to certain applications or contexts only.\n\n\\par{\\bf Intercepting low-level \\ac{CPU} processes} \nA core functionality of microcode is the decoding of instructions.\nBy intercepting this step during the execution of x86 code, it is possible to exert fine-grained control over the behavior of\ninstructions, programs, and the system as a whole.\nFrom a security perspective, additional functionality can be added to existing instructions,\nspecial handling for corner cases can be inserted, and security checks can be included.\n\n\nBesides changing and extending the instruction decoding,\nit is also possible to influence other aspects of the \\ac{CPU}'s operation.\nFor example, the exception handling mechanism is implemented with the help of microcode.\nBefore an exception reaches the kernel-level x86 
code, microcode can change the metadata passed to the kernel or handle the\nexception without involving the kernel at all.\nBy directly modifying the exception handling in microcode, expensive context switches can be avoided.\nThis allows, for example,\nspecial handling of page faults to implement page-based memory separation in a way that is completely transparent to the kernel.\n\n\\par{\\bf Isolated execution environment} \nThe microcode engine provides a fully-featured execution environment that cannot be\nintercepted by the running kernel in any way.\nAny exception delivered while microcode is running will be stalled until the current decoding is complete.\nMoreover, any state that is not explicitly written out will be contained in the microcode engine and cannot be accessed.\nMore specifically, both the running kernel and hypervisors are unable to inspect the execution state of the microcode engine.\nThis provides an enclave-like environment in which computations on sensitive data can be performed in an opaque way.\nOnly the results will be passed to the operating system,\nprotecting secret keys or other data inside the microcode.\n\n\\par{\\bf Extending and modifying the x86 instruction set} \nBy either reusing no longer used x86 instructions or adding entirely new\ninstructions to the decoding process,\nmicrocode can enable functionality not found in the standard x86 instruction set architecture.\nThese instructions can, for example, implement more complex semantics that are tailored to a specific use case.\nBy condensing calculations into fewer instructions, caches are utilized more effectively,\nincreasing performance.\nBesides performance improvements, new primitives can be added with new instructions.\nAs microcode can change the access level between operations,\nit is able to read and write kernel-only data structures.\nCombining this with fine-grained checks enables fast access to otherwise privileged functions, without support from the 
running\nkernel.\n\n\\section{Case Studies of Microcode Defenses}\n\\label{cucode:section:case_study}\n\nBased on the security primitives discussed above, we now present designs and \\textit{proof-of-concept} implementations of our microcode-assisted system defenses and constructive microcode applications. For each case study, we first briefly motivate the primitive, present the design and implementation, and conclude with an evaluation and discussion of advantages and drawbacks of our approach. Based on these case studies, we demonstrate that microcode indeed improves properties of those applications with regard to performance, security, and complexity.\nThe microcode programs and supporting infrastructure are publicly available~\\cite{microcode:amd_microprograms}.\n\nThe current state of the programs does not feature a mechanism for runtime configuration;\nhowever, this can be achieved in different ways.\nAs it is possible to load microcode updates during runtime,\nthe operating system can apply an update to enable or disable certain features.\nIt is also possible to add dedicated flags in the thread or process control structures created by\nthe operating system to signal which features should be enabled for a certain thread.\nHowever, both approaches require support from the OS to either synchronize the microcode update procedure\nacross all \\ac{CPU} cores or allocate and initialize the configuration fields for every new thread.\nAnother option is to use processor-internal storage to store configuration variables.\nTests showed that a writable region of memory exists that can be used to store these variables.\nUnfortunately, further experiments are needed to ascertain the nature\nof this memory region and the side effects of changing it at runtime.\n\nWe evaluate the performance of our case studies with microbenchmarks of the affected instructions. 
To this end, we determine the execution time in cycles as measured via the \\texttt{rdtsc} instruction.\nIt provides the value of the \\ac{TSC},\na counter which is incremented each clock cycle.\nThe code snippet used for performance benchmarks is shown in Figure~\\ref{ucode:figure:measurement}. All tests were performed on an AMD Sempron 3100+ running the minimal operating system developed by Koppe~et\\,al.\\xspace~\\cite{koppe2017reverse}. In the following, the cycle counts are given without the overhead of the measurement setup itself, which adds 65 cycles to every execution.\nFurther improvements to the performance properties of the defenses are possible with a greater understanding of the underlying hardware. This requires either further reverse engineering, especially with regard to scheduling, or, to fully utilize the existing hardware, assistance from the \\ac{CPU} vendors.\n\n{\\lstset{language=[x86masm]Assembler, deletekeywords={size}}\n\\begin{figure}\n\\begin{lstlisting}\nxor eax, eax\nxor edi, edi\ncpuid\nrdtsc\nxchg edi, eax\n\n; benchmarked instruction\nshrd ebp, ecx, 4\n\ncpuid\nrdtsc\nsub eax, edi\n\\end{lstlisting}\n\\caption{Microbenchmark setup to determine the execution time in cycles of \\texttt{shrd} (double precision right shift). 
The modern \\texttt{rdtscp} instruction variant is not available on the tested K8 CPU, thus the \\texttt{cpuid} instruction is used to serialize the instruction execution.}\n\\label{ucode:figure:measurement}\n\\end{figure}\n}\n\n\\subsection{Customizable RDTSC Precision}\n\\label{cucode:section:case_study:rdtsc}\n\n\\par{\\bf Motivation.}\nPrevious works demonstrated the possibility to reconstruct the memory\nlayout~\\cite{Hund,gras2017aslr,oren2015spy} using timing side channels.\nMore recently the \\textsc{Spectre} and \\textsc{Meltdown} attacks have shown in a spectacular\nway~\\cite{Kocher2018spectre,projectzeromeltdownspectre,Lipp2018meltdown} that it is possible\nto break the fundamental guarantees of memory isolation on modern systems.\nA common aspect of these attacks is the usage of high-resolution timers to observe the timing side channels.\nDue to these dangers,\nmodern browsers limit the accuracy of high-resolution timers to a recommended value~\\cite{w3c-rec-candidate}.\nWhile this does not eliminate all timing sources~\\cite{schwarz2017fantastic,kohlbrenner2016trusted},\nit raises implementation complexity of attacks and provides a mitigation against common exploits.\n\nOn the native level the timing information is commonly queried using the \\texttt{rdtsc} instruction.\nThe x86 architecture allows limiting \\texttt{rdtsc} to kernel space only.\nAny attempt of executing this instruction from user space will lead to a fault.\nBuilding upon this fact,\nthe operating system can limit the resolution of the timer available to user programs.\nUpon receiving the corresponding fault, the operating system queries the \\ac{TSC} itself,\nreduces the resolution accordingly, and passes the timestamp onto the program.\nNote that this incurs a significant performance overhead due to the necessary context switches.\n\n\\par{\\bf Design and Implementation.}\nSince we are able to change x86 microcode behavior,\nour goal is to implement a functionality similar to 
the\nbrowser mitigation for the native \\texttt{rdtsc} instruction.\nIn addition,\nour solution should be able to reduce the accuracy to a pre-defined value\nwithout incurring unnecessary overhead in form of context switches.\nTo this end,\nwe intercept the execution of \\texttt{rdtsc} and before the \\ac{TSC} value is made available to the application,\nwe set a pre-defined number of lower bits to zero.\nNote that the amount of zeroed bits is configurable (in the microcode\nupdate) to provide a trade-off between accuracy and security.\n\n\\par{\\bf Evaluation and Discussion.}\nWhile the default implementation of \\texttt{rdtsc} takes 7 cycles to execute,\nour custom implementation takes a total of 15 cycles to complete.\nThis overhead is due to the switch to microcode \\ac{RAM} and the additional\nlogical \\textsf{AND} operation to clear the lower bits of the \\ac{TSC} value.\nThe \\ac{RTL} representation of our \\texttt{rdtsc}\nimplementation is shown in the appendix in\nListing~\\ref{cucode:listing:rdtsc}.\n\nEven though our solution doubles the execution time, %\nit is far faster than the approach where the kernel needs to trap the raised interrupt.\nAt the same time,\nour security guarantees are comparable to the discussed browser mitigations.\nWhile raising the bar,\ntiming attacks are still possible by using methods described by\nSchwarz~et\\,al.\\xspace~\\cite{schwarz2017fantastic} and\nKohlbrenner~et\\,al.\\xspace~\\cite{kohlbrenner2016trusted}.\n\n\n\n\n\n\n\\subsection{Microcode-Assisted Address Sanitizer}\n\\label{cucode:section:case_study:hwasan}\n\\par{\\bf Motivation.}\n\\ac{ASAN}~\\cite{ASAN} is a compile-time instrumentation framework that introduces checks for every memory access in order to uncover both spatial and temporal software vulnerabilities. 
In particular, temporal faults such as \\textit{use-after-free} bugs present an important class of memory corruption vulnerabilities that have been used to exploit browsers and other software systems~\\cite{van2017dynamics}.\n\\ac{ASAN} tracks program memory state in a so-called \\textit{shadow\nmap} that indicates whether or not a memory address is valid.\nTherefore,\n\\ac{ASAN} inserts new instructions during compilation to perform the\nchecks as well as an instrumentation of allocators and deallocators.\nIn addition,\n\\ac{ASAN} enforces a quarantine period for memory regions and thus prevents them from being re-used directly.\nHowever,\nthis instrumentation incurs a performance overhead of roughly 100\\%.\n\nTo overcome the performance penalty and reduce the code size,\nthe authors of \\ac{ASAN} also discussed how a hardware-assisted version, dubbed \\ac{HWASAN}, could theoretically be implemented~\\cite{HWASAN}. The basic idea is to introduce a new processor instruction that performs access checks.\nThe general principle of the new instruction is illustrated in Figure~\\ref{ucode:figure:algorithms:hwasan}.\nIt receives two parameters: the pointer to be accessed and the memory access size.\nThe instruction then validates the memory access and its size with the help of the shadow map.\n\n{\\lstset{language=C}\n\\begin{figure}\n\\begin{lstlisting}\nCheckAddressAndCrashIfBad(Addr, kSize) {\n\tShadowAddr = (Addr >> 3) + kOffset;\n\tif (kSize < 8) {\n\t\tShadow = LoadByte(ShadowAddr);\n\t\tif (Shadow && Shadow <= (Addr & 7) + kSize - 1)\n\t\t\tReportBug(Addr);\n\t} else {\n\t\tShadow = LoadNBytes(ShadowAddr, kSize \/ 8);\n\t\tif (Shadow)\n\t\t\tReportBug(Addr);\n\t}\n}\n\\end{lstlisting}\n \\caption{Pseudocode of the \\ac{HWASAN} instruction~\\cite{HWASAN}; \\texttt{kSize} is the size of the memory access and \\texttt{kOffset} is a compile time constant that specifies the location of the shadow 
map.}\n\\label{ucode:figure:algorithms:hwasan}\n\\end{figure}\n}\n\n\\par{\\bf Design and Implementation.}\nInstead of requiring a hardware change to add the new \\ac{HWASAN} instruction,\nwe design a scheme to implement \\ac{HWASAN} in microcode.\nFollowing Figure~\\ref{ucode:figure:algorithms:hwasan},\nwe perform the checks and raise a fault in case an invalid memory access is detected.\nTo provide a clear separation between application code and instrumentation,\nwe implemented the checking in a single instruction.\nFor practical reasons,\nthe interface should be easy to add to existing toolchains.\n\nIn our implementation,\nwe chose to reuse an existing but unused x86 instruction, in this case the instruction \\texttt{bound}.\nSince the check requires the address and size of the memory access,\nwe changed the interface of this instruction in the following way:\nthe first operand indicates the address to be accessed,\nwhile the second operand indicates the access size.\nWe want to emphasize that our microcode instrumentation can be emitted\nwithout changes to an existing x86 assembler using the following syntax:\n{\\lstset{language=[x86masm]Assembler, deletekeywords={size}}\n\\begin{lstlisting}\nbound reg, [size]\n\\end{lstlisting}\n}\n\nSimilarly to \\ac{ASAN},\nour instruction is inserted in front of every memory access during compilation.\nWe also use the same shadow map mechanism and base address,\nhence the instrumentation requires no additional changes.\nHowever,\nthe key difference is the compactness and that no externally visible state is changed.\nIn case the memory access is valid,\nthe instruction behaves as a \\texttt{nop},\nbut if an invalid access is passed, a defined action is taken.\nTo this end, our prototype implementation currently supports three methods of error reporting:\n\\begin{enumerate}\n \\item raising a standard access violation,\n \\item raising the \\texttt{bound} interrupt, and\n \\item calling a predetermined x86 
routine.\n\\end{enumerate}\nNote that the first two options rely on the availability of an exception handling mechanism,\nwhile the latter option is self-contained and works even without kernel support.\n\n\\par{\\bf Evaluation and Discussion.}\nWhile the checking algorithm is semantically the same,\nwe observed a performance advantage of our solution.\nThe default \\ac{ASAN} implementation for a (valid) 4 byte load requires 129 cycles to complete,\nour version requires only 106 cycles.\nAnother advantage of our implementation is that no x86 register is changed during its execution:\ninstead of using x86 general purpose registers, our implementation\nstores temporary values in ephemeral microcode-internal registers.\nThis means the insertion of the instrumentation does not increase the register\npressure and does not cause additional register spills to the stack.\nThis is in comparison to the original \\ac{ASAN} implementation\nwhich uses two additional x86 registers to hold temporary values.\nThe overhead of additional register spills is not included in\nour benchmark as it is highly dependent on the surrounding code.\nThe \\ac{RTL} representation of our \\ac{HWASAN} implementation\ncan be found in our Github repository~\\cite{microcode:amd_microprograms}.\n\n\n\n\\subsection{Microcoded Instruction Set Randomization}\n\n\\par{\\bf Motivation.}\nIn order to counter so-called \\textit{code-injection} attacks,\na series of works investigated \\ac{ISR}\nschemes~\\cite{sovarel2005s,papadogiannakis2013asist,portokalidis2010fast,hu2006secure,barrantes2003randomized,kc2003countering}\nwith the goal of preventing the correct execution of maliciously injected code.\nTo this end,\nthe instruction encoding is randomized (e.g.,\nusing an \\textsf{XOR} with a pre-defined key) for all or a subset of instructions,\nso that the adversary does not know the semantics of a randomized instruction.\nNote that recently published advanced schemes also aim to 
mitigate\n\\textit{code-reuse} attacks using strong cryptographic encryption\nalgorithms~\\cite{sinha2017reviving}.\nHowever, most schemes require hardware support,\nwhich prevents their deployment to \\ac{COTS} \\acp{CPU}.\n\n\\par{\\bf Design and Implementation.}\nOur \\ac{ISR} scheme removes the link between the actual x86 operation and its semantics,\nand thus an adversary is unable to infer the meaning of an instruction\nstream even if disassembled during a \\ac{JIT}-\\ac{ROP} attack.\nIn order to be robust even when facing code-reuse or \\ac{JIT}-\\ac{ROP} attacks,\nwe assume fine-grained code randomization or software diversification.\n\nOur proof-of-concept implementation supports six different operations:\nmemory load, register move, add,\nleft and right shift,\nand exclusive or.\nEach operation can be freely assigned to any microcoded x86 instruction\nthat allows for one register operand and one memory operand.\nThis assignment effectively binds the executed x86 code to a specific instance of the \\ac{ISR}.\nExecution is only possible\nif the semantics implemented in microcode for each instruction match the one used when generating the x86 code.\nNote that due to this varying assignment and the variable instruction length of the affected opcodes,\nit is not possible to assemble a \\ac{ROP} chain or shellcode matching all possibilities.\nAdditionally, we support masking of input and output values before they\nare written to or read from potentially attacker-accessible storage,\nincluding system memory and registers.\n\nTo facilitate the translation of existing x86 code to opcodes using the newly introduced semantics of the \\ac{ISR},\nwe implemented a \\textit{transpiler}.\nThis transpiler processes a stream of disassembled x86 instructions and replaces all\noccurrences of supported opcodes with the appropriate opcodes with changed semantics.\nThe selection of the replacement opcode is performed based on the assignment in the corresponding microcode 
update.\nThe input to the transpiler is thus the source instruction stream and the\nmapping of x86 instructions to semantics as implemented by the \\ac{ISR};\nthe output is a modified instruction stream.\nThis output stream can then be assembled by a standard x86 assembler,\nas no new instructions are introduced.\n\n\\par{\\bf Evaluation and Discussion.}\nWe evaluate the performance of our implementation by comparing the runtime\n(measured in cycles according to the test setup described previously)\nof a toy example consisting only of supported opcodes with the corresponding transpiled version.\nOur measurements indicate that our microcoded \\ac{ISR} scheme introduces\nan overhead of 2.5 times on average over a set of 5 different examples,\ncompared to the same code running natively.\nThis overhead is mainly due to replacing non-microcoded instructions (that normally\ntake 1-3 cycles) with microcoded instructions that require at least 7 cycles,\nincluding the additional overhead of switching to microcode \\ac{RAM} execution.\nWe provide one of the test cases in Listing~\\ref{cucode:listing:isr} in the appendix.\nNote that the cumulative performance of instruction streams may vary due to pipelining and parallel execution.\nThis is especially visible if instructions covered by the \\ac{ISR} are mixed with standard x86 instructions.\nAs our toy examples exclusively use transpiled instructions,\nwe arrive at the worst-case overhead.\nSince the \\ac{ISR} can implement more complex semantics such as a multiply-accumulate,\nthe cycle overhead can be reduced with a more advanced transpiler.\nWe want to emphasize that our \\ac{ISR} does not require hardware changes compared\nto previous schemes and thus can be deployed on \\ac{COTS} \\acp{CPU} with a microcode update.\n\n\n\n\\subsection{Microcode-Assisted Instrumentation}\n\n\\par{\\bf Motivation.}\nTraditional binary defenses often suffer from either significant performance overhead or incompleteness.\nThis is 
typically due to the reliance on dynamic instrumentation or static binary rewriting.\nHowever,\nwith the ability to change the behavior of x86 instructions via a microcode update,\nit is possible to intercept only specific instructions without impacting the performance of unrelated code.\nHence,\nmicrocode-assisted instrumentation combines the minimal performance overhead\nof static binary rewriting with the completeness of dynamic instrumentation solutions.\n\n{\\captionsetup[figure]{skip=5pt}\n\\begin{figure}[!t]\n\t\\centering\n\t\\resizebox{0.65\\linewidth}{!}{\n\t\t\\includegraphics{figures\/instrumentation.png}\n\t}\n\t\\caption{Control flow of an instrumentation.}\n\t\\label{ucode:figure:primitives:instrumentation}\n\\end{figure}\n}\n\n\\par{\\bf Design and Implementation.}\nWe designed a microcode-as\\-sist\\-ed instrumentation scheme that allows the generation of microcode updates that\nintercept a specific instruction; upon execution of this instruction, control is transferred to a specific address.\nThis address contains standard x86 code to perform the instrumentation and finally resume execution.\nThe microcode update can additionally contain custom-tailored filtering,\nso that the x86 handler is only invoked under specific conditions.\nAs the filtering is implemented directly in microcode,\nthe overhead of changing the x86 execution path, which can invalidate\nbranch prediction and caches, is only incurred when needed.\n\n\\par{\\bf Evaluation and Discussion.}\nTo test the viability of the instrumentation,\nwe implemented a proof-of-concept microprogram that instruments \\texttt{shrd} to call\nan x86 handler if a certain constant is detected in the argument register.\nThe control flow is illustrated in Figure~\\ref{ucode:figure:primitives:instrumentation}.\nUpon execution of the instruction,\n\\raisebox{.5pt}{\\textcircled{\\raisebox{-.9pt} {1}}}~control is transferred to the microcode \\ac{RAM}.\n\\raisebox{.5pt}{\\textcircled{\\raisebox{-.9pt} 
{2}}}~As a filter,\nwe check if the argument register is equal to a constant.\nIn case the filter does not match,\nthe instruction is executed normally and x86 execution continues after \\texttt{shrd}.\nIn case the filter matches,\n\\raisebox{.5pt}{\\textcircled{\\raisebox{-.9pt} {3}}}~the current instruction pointer\nis pushed onto the stack and the x86 instrumentation part gains control,\ncomparable to a \\texttt{call} instruction in x86.\nOnce our instrumentation gains control, it can perform any number of calculations\nand is not constrained by the size limitations of the microcode \\ac{RAM}.\n~\\raisebox{.5pt}{\\textcircled{\\raisebox{-.9pt} {4}}}~Finally,\nthe instrumentation continues the normal execution by returning to the interrupted code.\n\nWe also conducted a performance benchmark to determine the overhead introduced by our instrumentation\nfor the case where the microcoded condition does not hold --- illustrated with~\\raisebox{.5pt}{\\textcircled{\\raisebox{-.9pt} {2}}}~in Figure~\\ref{ucode:figure:primitives:instrumentation}.\nIn this case, the x86 execution should continue as fast as possible\nin order to reduce the overhead for any code that is not to be inspected.\nWe use the \\texttt{shrd} instrumentation for this test and measure the performance according to the described test setup.\nThe original implementation of \\texttt{shrd} executed in 2 cycles,\nwhile our test case took 8 cycles.\nThis overhead is mainly due to the switch to microcode \\ac{RAM}\nand the two triads inserted for the instrumentation check.\nThe microcode \\ac{RTL} of the \\texttt{shrd} instrumentation is available in our Github repository~\\cite{microcode:amd_microprograms}.\n\nWhile the execution time of the single instruction is increased substantially,\nthis overhead is fixed for any semantic the instruction originally implements.\nThis implies that our instrumentation only adds 6 cycles to perform its own check,\nregardless of the original run time of the instruction.\nAdditionally, 
we do not introduce a conditional x86 branch,\nwhich would further increase the overhead due to potential branch mis-predictions.\nMoreover, our implementation does not use scratch x86 registers and thus does\nnot increase register pressure or cause additional memory accesses.\nFinally, the overhead is only introduced for instructions that are to be inspected;\nthe rest of the execution is not impacted.\nThis is in contrast to existing dynamic instrumentation frameworks,\nsuch as Valgrind~\\cite{nethercote2007valgrind}, PIN~\\cite{luk2005pin} or DynamoRIO~\\cite{dynamorio},\nwhich increase the execution time for all instructions. For a lightweight instrumentation, the overheads induced by these tools are about 8.3, 2.5 or 5.1 times, respectively~\\cite{luk2005pin}.\n\nOn top of our framework,\nany binary instrumentation relying on intercepting a \\textit{small} number of x86 instructions can be realized.\nNote that a current limitation is that only microcoded instructions can be intercepted; however,\nthis is a limitation of the current reverse engineering progress.\nPrevious work indicated the possibility of intercepting all instructions,\nincluding non-microcoded ones.\n\n\n\n\\subsection{Authenticated Microcode Updates}\n\\label{cucode:section:microcode_update}\n\n\\par{\\bf Motivation.}\nWhile the insufficiently protected microcode update mechanism of\nAMD K8 and K10 processors enabled the research in the first place,\nit simultaneously poses a major security issue:\nan attacker can apply any update of her choosing,\nwhich was demonstrated by Koppe~et\\,al.\\xspace~\\cite{koppe2017reverse} by developing stealthy microcoded Trojans.\nHowever, as the microcode update mechanism itself is implemented in microcode,\nit is possible to develop a protection mechanism in the form of a microcode update that can provide limited security guarantees.\nWe implement a proof-of-concept that demonstrates the feasibility of such a scheme on the affected \\acp{CPU}.\n\n\\par{\\bf 
Design and Implementation.}\nIn order to mitigate the risk associated with the current scheme,\na microcode update mechanism is required that only accepts authenticated updates.\nHowever,\ngiven the ephemeral nature of microcode updates,\nthis countermeasure requires either a hardware re-design or a trusted application (e.g.,\na part of \\ac{UEFI} with secure boot) that applies a suitable microcode update early during boot.\nIn particular,\nthis update must then verify each further update attempt using proper cryptographic primitives.\nAt the same time,\ndue to the limited space in the microcode update,\nthe verification has to be small in terms of code size.\nNote that performance is of lesser priority in this case since\nmicrocode updates are typically only performed once per system start.\n\nOur implementation extends the \\texttt{wrmsr} instruction, which is used to start the microcode update, to enforce the following properties for the microcode update:\n\\begin{enumerate}\n \\item The update includes 32 triads, the maximum possible number on the K8 architecture. The vendor-supplied updates are always padded to this length.\n \\item A \\ac{HMAC} is appended to the update directly after the last triad.\n \\item The \\ac{HMAC} is correct for the full update, including the header. The inclusion of the header in the authenticated part protects the match registers and thus the affected instructions. 
The key of the \\ac{HMAC} is included in the initial microcode update.\n\\end{enumerate}\n\n\nFor our implementation,\nwe choose the block cipher \\ac{TEA}~\\cite{wheeler1994tea} due to the simplicity of\nits round function which results in a small code size in the microcode \\ac{RAM}.\nThis is especially important as our current understanding of microcode semantics\nonly allows loading of 16-bit immediate values per microcode operation.\nHence,\nloading of a single 64-bit constant requires a total of 8 operations or nearly\nthree triads (note that the whole microcode update is limited to 32 triads only).\nWhile it would be preferable to implement a strong cryptographic algorithm such as \\ac{AES},\nthese commonly require S-Boxes,\nwhich we cannot support due to code size constraints.\n\n\\par{\\bf Evaluation and Discussion.}\nAs we extend the standard update mechanism with an additional verification of the entire microcode update,\nwe incur a significant performance hit.\nIn our tests, applying a maximum length update takes 5,377 cycles without the authenticated update mechanism.\nWith our deployed authentication scheme, loading the same update requires 68,525 cycles.\nThis increase is expected due to the added verification.\nAs the update is only applied once during system boot, the performance hit is still negligible.\nFor comparison,\nthe AMD 15h architecture (Bulldozer etc.) 
requires 753,913 cycles on average for an update~\\cite{tr:2014:chen}.\nThis generation likely uses a public key scheme to verify the update.\n\nDue to the code size limitation, we were restricted to the simple and small \\ac{TEA} algorithm and could not implement a public key verification scheme.\nHowever, if the update authentication mechanism were contained in the microcode \\ac{ROM} directly,\nthe code size would not be as restricted.\nWhile our \\ac{ROM} readout indicates a very high usage of the available triads,\nthere are still more padding triads present than would fit into a microcode update.\nIn our prototype implementation, the user can decide which updates to trust,\nor, given the possibility to disassemble the updates,\neven which parts of an update should be applied.\nThis allows for finer control over the hardware than would be possible using only a vendor-accessible signature\nmethod.\nThe \\ac{RTL} of our microcode authentication scheme is available in our GitHub repository~\\cite{microcode:amd_microprograms}.\n\n\n\n\n\\subsection{$\\mu$Enclave}\n\n\\par{\\bf Motivation.}\nIntel \\ac{SGX}~\\cite{iacr:2016:86} is an instruction set extension that enables\nthe creation of isolated, trusted execution environments with private memory regions.\nThese so-called \\textit{enclaves} are protected from processes even at\nhigh privilege levels and enable secure remote computation.\nInspired by \\ac{SGX}, we designed and implemented a \\textit{proof-of-concept} enclave functionality,\ndubbed \\textit{$\\mu$Enclave}.\n$\\mu$Enclave can remotely attest that code indeed runs inside the enclave and ensures confidentiality of data.\nWe can thus retrofit basic enclave functionality to older \\acp{CPU} not offering a comparable solution.\nAdditionally, we use this case study to illustrate the isolation property of microcode.\n\n\n\\par{\\bf Design and Implementation.}\nWe leverage the separate microcode \\ac{IDU} to establish an isolated execution environment.\nThe 
other decode units are halted by design of the microarchitecture while the microcode \\ac{IDU} is active.\nDue to these isolation properties, we can safely assume that x86 code,\neven when running with kernel-level privileges,\ncannot interfere with the enclave program implemented in microcode at run time.\n\n\n$\\mu$Enclave is based on the authenticated microcode update mechanism,\npresented in Section~\\ref{cucode:section:microcode_update},\nand the following strategy:\n\\begin{enumerate}\n \\item The trust is built upon the symmetric key contained in the first microcode update applied early during boot by \\ac{UEFI}. The entity controlling that key may be a chip manufacturer, software vendor, or the end-user. The entity has to ensure that payload microcode updates contain only benign behavior before signing them. \n\\item The program that is supposed to run in the $\\mu$Enclave is implemented in microcode and embedded in a signed payload microcode update.\n\\item The enclave program may perform arbitrary computations and access virtual memory. 
The enclave program may write sensitive data into \\ac{RAM}, but it must itself ensure security properties like authenticity, integrity, and secrecy using signing and encryption.\n\\item The enclave program can remotely attest that it indeed runs within the enclave by signing a message with the symmetric enclave key.\n\\end{enumerate}\n\n\\par{\\bf Discussion.}\nIn combination with a challenge-response protocol,\n$\\mu$Enclave enables remote attestation, and additional services of the enclave\ncan be exposed either by augmenting x86 instructions or by adding new \\acp{MSR}.\nA major drawback of $\\mu$Enclave is the restricted code size due to the microcode \\ac{RAM} size.\nThis limitation can be lifted by either implementing a small virtual machine and interpreting signed bytecode from\nmain memory or iteratively streaming signed microcode from main memory to microcode \\ac{RAM} as it executes.\nFor the latter, we currently lack the micro-ops that can write to microcode \\ac{RAM}.\nWhile our current implementation does not support either approach,\nthis is not a fundamental limitation of $\\mu$Enclave.\n\nWhen compared to sophisticated trusted execution environments such as Intel \\ac{SGX} or ARM TrustZone,\n$\\mu$Enclave is more cumbersome to use.\nAs the enclave code needs to be written as microcode,\nthe development requires experience with this environment.\nAdditionally, the limited code size restricts the selection of\ncryptographic primitives to those with very small implementations.\nThis results in the use of less secure cryptographic algorithms and thus lower security guarantees.\nFinally, the \\ac{CPU} lacks hardware support and acceleration for cryptographic operations.\nThis means, for example,\nthat the attestation needs to be implemented by the programmers of enclave code themselves.\nHowever, $\\mu$Enclave can be used on older \\acp{CPU}\nthat do not provide the mentioned vendor-supplied solutions.\nAs such, it is possible to add similar primitives to legacy 
\\acp{CPU} without requiring a hardware change.\n\n\n\n\\section{Background and Related Work}\n\nIn the following, we first present the technical background information needed to understand the microcode details presented in this paper.\nNote that the background for the individual defenses is covered in their respective subsections in Section~\\ref{cucode:section:case_study}.\nIn addition, we review prior work that demonstrated the capabilities of microcode and discuss how our contributions presented in this paper relate to existing work.\n\n\\subsection{Microcode Background}\n\nThe \\ac{ISA} of a processor defines the available instructions and serves as an interface\nbetween software and hardware~\\cite{book:computer_architecture:stallings}.\nWe refer to the actual hardware implementation of an \\ac{ISA} as \\textit{microarchitecture}.\nThe \\ac{IDU} generates control words based on the currently decoded instruction and is a crucial\ncomponent of the microarchitecture especially for \\ac{CISC} processors with complex instructions.\nThe \\ac{IDU} of modern x86 processors is implemented as a hybrid of a hardwired decode unit,\nwhich consists of sequential logic,\nand a microcoded decode unit,\nwhich replays precomputed control words named \\textit{microinstructions}.\nThey are stored in a dedicated,\non-chip microcode \\ac{ROM}.\nThe microcode is organized in so-called \\textit{triads} containing three microinstructions and a \\textit{sequence word},\nwhich denotes the next triad to execute.\nIn the microcode address space, triads can only be addressed as a whole, i.e.,\nindividual bytes are not accessible.\nThere are multiple categories of microinstructions like arithmetic, logic,\nmemory load\/store, and special microinstructions.\n\nThe microcode of modern x86 processors can be updated at runtime in order to fix errata and add new features\nwithout the need to resort to product recalls~\\cite{patent:2002:patch_device,koppe2017reverse}.\nThese updates are usually 
applied early during boot by the BIOS\/EFI or operating system.\nThe process is initiated by loading the microcode update file into\nmain memory and writing its virtual address to a \\ac{MSR}.\nThe CPU then copies the microinstructions of the update to the dedicated on-chip microcode \\ac{RAM}.\nThe update engine also sets the \\textit{match registers} according to the values given in the update file.\nThe match registers contain microcode \\ac{ROM} addresses and act as breakpoints.\nThey redirect control to the triads of the update stored inside the\non-chip \\ac{RAM} once a breakpoint in microcode \\ac{ROM} is hit.\nComplex or rarely used x86 instructions are implemented with\nmicrocode and have a predefined entry point in microcode \\ac{ROM}.\nHence,\nmicrocoded x86 instructions can be intercepted by placing a breakpoint at the corresponding entry point.\nThe triads in the microcode update define the new logic of the x86 instruction.\n\n\n\n\\subsection{Related Work}\n\n\\par{\\bf Microcode and Microcode Updates.}\nPrevious work~\\cite{tr:2014:chen,hawkesmicrocode,link:2004:opteron_exposed} already provided indicators that the microcode\nupdate functionality of several \\acp{CPU} families is not sufficiently protected and might allow for custom updates to be applied.\nKoppe~et\\,al.\\xspace~\\cite{koppe2017reverse} then reverse engineered both the update mechanism of AMD K8 and K10 \\acp{CPU} as\nwell as the encoding of microcode to a point that allowed the creation of custom microcode updates.\nThese updates implemented simple microcode applications such as basic instrumentation and backdoors,\nwhich were applicable to unmodified \\acp{CPU}.\nOther work highlighting the capabilities of microcode was presented by Triulzi~\\cite{Arrigo:2016:Troopers,Arrigo:2015:Troopers},\nbut details of the implementation are not publicly available.\n\nIn this paper, we substantially extend these insights and perform further in-depth reverse engineering and analysis of the 
microcode \\ac{ROM}. By understanding the \\ac{ROM} mapping, we are able to disassemble the microcode of arbitrary x86 instructions to enable the implementation of sophisticated microprograms, as demonstrated in later sections of this work.\n\n\\par{\\bf Microcoded Shadow Stacks.}\nDavi~et\\,al.\\xspace~\\cite{davi2015hafix} introduced an approach called \\ac{HAFIX} and showed that it is possible to implement a so-called \\textit{shadow stack}~\\cite{dang2015performance} using microcode (in cooperation with researchers from Intel).\nHowever, \\ac{HAFIX} relies both on a compile-time component to add additional instructions to the binary\nand is only available on development \\acp{CPU},\nnot on standard consumer hardware.\nIntel also announced the introduction of shadow stacks into end-user \\acp{CPU} with the addition of\n\\ac{CET}~\\cite{intel2016cet}.\nThis technology tracks all calls and returns, which allows checking whether the normal stack and the shadow stack point to\nthe same return address.\nIf a difference is encountered,\nan exception is raised.\nAdditionally, the memory pages containing the shadow stacks are protected using special page table attributes.\nOnce \\acp{CPU} with this technology reach the market,\nshadow stacks will be available in production code with (almost) no additional performance overhead.\n\nIn this paper, we present several designs and proof-of-concept implementations of microcode-assisted system defenses beyond shadow stacks. In addition, our paper and the supplementary material~\\cite{microcode:amd_microprograms} will enable other researchers to build similar microcode-based system defenses and explore this area further.\n\n\n\\section{Discussion and Future Work}\n\\label{cucode:section:discussion}\n\nIn this section, we discuss benefits and challenges of microcode-assisted system defenses and review limitations of microcode in general and of our reverse engineering approach in particular. 
Furthermore, we present and discuss potential topics for future work such as microcode-assisted shadow stacks, lightweight syscalls, and information isolation. We also shed light on how microcode Trojans can be detected. \n\n\\subsection{Microcode for System Defenses}\n\\label{cucode:section:discussion:defenses}\n\nModern processor microcode and the ability to update microcode can provide useful primitives such as enabling or disabling \\ac{CPU} features at runtime, intercepting instruction decoding or other microarchitectural processes to modify existing behavior, providing a small execution environment isolated from the operating system kernel, and bypassing some boundaries of the x86 \\ac{ISA} to implement new features. We have shown in Section~\\ref{cucode:section:case_study} that these primitives enable the implementation of some defensive schemes like customizable accuracy of the built-in x86 timer and $\\mu$Enclave in the first place. Other defenses such as microcoded \\ac{HWASAN} and \\ac{ISR} benefit from these primitives with regard to performance overhead and complexity. With more knowledge about microcode, additional defenses like opaque shadow stacks and information isolation can be built, as we discuss in Sections~\\ref{cucode:section:discussion:shadowstacks} and~\\ref{cucode:section:syscalls}. However, the generality of microcoded primitives suffers due to the limited number of processor models that currently accept custom microcode updates. We argue that the introduction of an open and documented microcode API could benefit system security research and future defensive systems. Such an API has to address several challenges like abstracting away the underlying changes across processor generations, conflict handling for concurrent updates, and ensuring system stability. In order to avoid microcode malware, processor vendors could introduce an opt-in development mode that allows self-signed updates. 
Software vendors that want to use such an update in the field, e.g., with processors not in development mode, have to go through a signing process with the \\ac{CPU} vendor.\n\n\\subsection{Limitations}\n\\label{cucode:section:discussion:limitations}\n\nFirst, we review the limitations of microcode in general. The execution speed of certain computations can be sped up by several orders of magnitude by implementing the algorithm in hardware, e.g., in an ASIC or FPGA. Such performance gains do not apply to computations moved from an x86 implementation to microcode, because essentially it is still software. Merely the decoding is changed, but the resulting operations performed by the functional units of the processor are similar. Furthermore, the intervention of microcode in microarchitectural processes directly implemented in hardware is limited. Custom microcode updates are thus limited to changing the semantics of x86 instructions within the constraints of the existing internal \\ac{RISC} instruction set. To the best of our knowledge, no mechanism exists to periodically trigger an action in microcode to implement asynchronous monitoring. All actions of custom microcode programs need to be triggered by an external event. However, as it is possible to intercept arbitrary instructions and microcode-internal processes, there are multiple options to implement a basic form of such monitoring.\n\nOur microcode research is further limited due to our incomplete knowledge of microcode and the underlying microarchitecture. The information gained through reverse engineering may lack important details or even contain mistakes. This can only be resolved with access to the official documentation of the used features. Our microprograms only run on AMD K8 to K10 family-based processors. More modern \\acp{CPU} include effective authentication schemes, such as RSA-based public key cryptography, which would need to be bypassed in order to apply a custom update. 
The microcode update size of the affected \\acp{CPU} is limited to 32 triads, which prohibits the implementation of large microprograms. We partly bypassed this restriction by introducing x86 callbacks. However, this bypass is not feasible in scenarios with untrusted operating system kernels such as $\\mu$Enclave. More recent \\acp{CPU} use larger microcode updates, which is an indication that their patch \\ac{RAM} is larger and can potentially accommodate more complex updates. Despite the limited code size on the tested \\acp{CPU}, no upper bound on the execution time of microcode was encountered, and we were able to lock up the \\acp{CPU} by forcing them into an endless loop in microcode. Furthermore, we currently can only hook microcoded x86 instructions. Detailed lists of these microcoded instructions for the K8 architecture can be found in~\\cite{amd:k8vectorpathlist} on pages 273ff. The instructions listed as VectorPath are microcoded, while Direct\/DoublePath instructions are decoded in hardware. While there are indications that it is possible to intercept all instructions, our current reverse engineering results do not allow for this. Lastly, the microcode \\ac{ROM} readout contains non-correctable read errors induced by dust particles or irregularities. We are currently working on improving the readout and obtaining an error-free version.\n\n\\subsection{Correctness of Reverse Engineering Results}\n\nAs our results are based on reverse engineering,\nwe cannot guarantee their correctness. 
Additionally, we are limited to observing the output of the \\ac{CPU}; any additional details of the microarchitecture such as scheduling or internal state updates are hidden from us.\nThe observations might constitute unintended behavior of the \\ac{CPU} when used outside of its specifications.\nHowever, we verified our conclusions using available resources where possible.\nA strong indication that our results are indeed correct\nis the fact that we can construct complex microcode programs that behave as expected when executed on the \\ac{CPU}.\nAdditionally, the behavior is consistent between \\acp{CPU} of the AMD K8 and K10 families,\neven though they differ in details such as cache sizes, core counts,\nor feature size, and even certain implementation details such as the selection of microcoded instructions.\nThere are also parallels between our results and the descriptions found in\nthe patent describing the RISC86 instruction set~\\cite{patent:2002:risc86},\nwhich appears to be used internally by the \\ac{CPU}.\nFor example,\nthe encoding of the condition codes of microcode jumps is the same as stated in the patent.\nWe also found similarities in the encoding of individual opcodes,\nalbeit with differences in length and number of opcode fields.\nLastly, certain operations,\nmost prominently multiple division variants or steps,\nand internal register functions,\ne.g.,\nthe address of the next x86 instruction to be executed,\nare closely related.\nAfter reconstructing the mapping between virtual and physical microcode\naddresses, we could also locate the implementation of specific x86 instructions.\nBy comparing the disassembled microcode with the expected function of the x86 instruction,\nwe determined that we indeed correctly interpret the bit sequences.\nExamples of this are the instruction \\texttt{shrd},\nwhose implementation shows shifts of the argument registers\naccording to the specifications, and the \\texttt{wrmsr} opcode,\nwhich at its start has a 
large number of instructions comparing ECX (the register\nnumber to write to) to specific values consistent with the documented interface.\nWe also verified individual microcode instructions on their own by copying the bit sequences to a microcode update,\nexecuting them, and comparing the output.\nThis was extended during the development of the microcode emulator, for which we tested different\ninput states on both the emulator and the \\ac{CPU} to ensure the correctness of our emulation.\n\nA final confirmation of the correctness can be achieved with the cooperation of the \\ac{CPU} vendors.\nThe availability of official specifications and documentation would allow for a faster development\nof custom microcode programs and could potentially allow better usage of available \\ac{CPU} features.\nUnfortunately, we did not receive a response from AMD after we contacted them.\n\n\\subsection{Shadow Stacks}\n\\label{cucode:section:discussion:shadowstacks}\n\nDuring our research, we considered an opaque shadow stack implementation as a potent use case for a constructive microprogram. However, due to the fact that \\texttt{ret} (near, without immediate) is not implemented in microcode, we cannot instrument this instruction. As this instruction is a key requirement in implementing an opaque shadow stack, we were unable to create a proof-of-concept. As \\ac{CPU} vendors are able to determine the logic of non-microcoded instructions during the design process, they are able to implement such a shadow stack. Below, we discuss the advantages of an opaque shadow stack retrofitted by microcode. \n\nShadow stack defenses implement a second stack that is kept in sync with the system's default stack. Shadow stacks often possess special properties in order to achieve certain security goals. 
For example, the shadow stack can be placed in memory that cannot be accessed by normal program instructions~\\cite{kuznetsov2014code}, the direction of growth can be inverted to detect illegal stack accesses that yield diverging results~\\cite{salamat2008reverse}, or the shadow stack stores only fixed-size elements to preserve control-flow metadata in the event of a stack-based buffer overflow~\\cite{clang-safestack}. Shadow stacks ensure the integrity of sensitive data on the stack. Therefore, they are often integrated in code-reuse defenses such as CFI~\\cite{dang2015performance,clang-safestack,stackarmor:15,per-input-cfi:2015} in order to protect the backward edge of the control flow. Due to their nature, shadow stack implementations need to extend the logic of instructions operating on the stack such as \\texttt{call} and \\texttt{ret}. Software-based implementations achieve this by adding instructions at all occurrences during compilation~\\cite{kuznetsov2014code,dang2015performance,clang-safestack,gcc-safestack} or with static binary rewriting. In 2015, Davi~et\\,al.\\xspace~\\cite{davi2015hafix} proposed a hardware-assisted shadow stack implementation with low performance overhead. However, the defense still requires the insertion of instructions into the protected application.\n\nShadow stacks can also be implemented in an opaque way. The semantic of existing stack operations is extended rather than relying on the addition of instructions. Benefits of this approach are compatibility with legacy applications, protection of the whole software stack instead of transformed applications and software libraries only, and potential performance gains due to smaller code size as well as improved utilization of the underlying microarchitecture. Depending on the implementation details, stronger security properties can be enforced, e.g., by placing the shadow stack at a memory area not accessible by conventional user mode instructions. 
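The check that such extended \texttt{call}\/\texttt{ret} semantics would perform can be illustrated with a small sketch, here simulated in plain C rather than microcode; the depth limit and function names are purely illustrative assumptions, not part of any described implementation.

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Illustrative simulation of an opaque shadow stack check.
 * In an actual design this state would live in memory that is
 * inaccessible to user mode instructions. */
#define SHADOW_DEPTH 1024

static uintptr_t shadow[SHADOW_DEPTH];
static size_t shadow_top = 0;

/* On a call: additionally record the return address on the shadow stack. */
static void shadow_on_call(uintptr_t ret_addr) {
    assert(shadow_top < SHADOW_DEPTH); /* a real design must handle overflow */
    shadow[shadow_top++] = ret_addr;
}

/* On a ret: the address taken from the normal stack must match the
 * shadow copy; a mismatch indicates a corrupted return address. */
static int shadow_on_ret(uintptr_t ret_addr) {
    assert(shadow_top > 0);
    return shadow[--shadow_top] == ret_addr; /* 1 = consistent, 0 = violation */
}
```

In the opaque variant discussed here, this logic would be folded into the semantics of \texttt{call} and \texttt{ret} themselves, so no instructions need to be added to protected binaries.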
Intel released the specification of \\ac{CET} containing a shadow stack in 2016 and added GCC support in 2017~\\cite{intel2016cet,gcc2017cet}. However, to date no processor with \\ac{CET} support has been released. The \\ac{CET} shadow stack is opaque except for some new management instructions, such as switching the shadow stack. We argue that these management instructions will be microcoded, because they implement complex logic and are not performance critical due to their rare occurrence.\n\n\\subsection{Lightweight Syscalls}\n\\label{cucode:section:syscalls}\n\nThe syscall interface is provided by the processor and the operating system to offer services to user space. During its setup, the pointer to the syscall handler in kernel space and the kernel stack pointer are stored in \\acp{MSR}. Once the syscall instruction is invoked, the processor reads the corresponding \\acp{MSR}, switches the stack, and redirects control flow. The syscall handler then invokes the handler for the requested service according to the given syscall number in register \\texttt{eax}. The service handler sanitizes the inputs, checks access privileges (where applicable), and performs the desired action. Ultimately, control is transferred back to user space via the \\texttt{sysret} instruction by restoring segment registers, again switching the stack, and redirecting control to the stored instruction pointer.\n\nThe performance overhead imposed by syscalls discourages defenses from invoking them frequently. Thus, vital and critical runtime metadata of defenses are kept in user space, where they are exposed to attackers. To thwart potential tampering with the metadata, many different kinds of information hiding schemes were introduced in the past years~\\cite{kuznetsov2014code,ASLR-Guard,dang2015performance,clang-safestack}. However, information hiding has been shown to be ineffective in several attack scenarios~\\cite{gawlik2016enabling,goktacs2016undermining,kollenda2017towards,evans2015missing}. 
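The dispatch step of the conventional syscall path described above, where the number in \texttt{eax} selects the service handler, can be sketched as follows; the handler names, table, and return values are hypothetical and only illustrate the mechanism.

```c
#include <stddef.h>

/* Illustrative sketch of syscall dispatch: the number the caller
 * placed in eax indexes a table of service handlers. All names
 * and values here are hypothetical. */
typedef long (*syscall_fn)(long arg0, long arg1);

static long sys_getpid(long a0, long a1) { (void)a0; (void)a1; return 42; }
static long sys_write(long fd, long len) { (void)fd; return len; }

static const syscall_fn syscall_table[] = { sys_getpid, sys_write };

/* The bounds check mirrors the kernel's validation of the syscall
 * number before the indirect call. */
static long dispatch(unsigned long nr, long a0, long a1) {
    if (nr >= sizeof syscall_table / sizeof syscall_table[0])
        return -1; /* a real kernel would return -ENOSYS */
    return syscall_table[nr](a0, a1);
}
```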
We propose lightweight syscalls implemented in microcode, which are assigned to a dedicated opcode. They leave segment registers, the x86 instruction pointer, and the stack in place. Once the opcode is executed, the microcode implementation switches to kernel mode, performs a desired action, and switches back to user mode. The action is specific to the needs of the particular defense and could for example be a restricted read or write to the defense's metadata in kernel memory. Note that special care must be taken during implementation of the microcode update to not introduce a privilege escalation vulnerability. With lightweight syscalls, defenses such as \\ac{CFI} and \\ac{CPI} can migrate from information hiding to information isolation enforced by the privilege level of the processor. This can potentially further harden existing defenses against advanced adversaries. Due to the nature of lightweight syscalls, we estimate a low performance overhead. Based on our limited knowledge about microcode, we were unfortunately unable to implement and evaluate such an approach. Future work should explore such a microcode-based defense primitive.\n\n\n\n\n\n\n\n\n\\subsection{Microcode Trojan Detection}\n\\label{cucode:section:discussion:trojandetection}\n\nKoppe~et\\,al.\\xspace have shown that microcode updates can contain malicious behavior~\\cite{koppe2017reverse}. All presented microcode Trojans rely on the same mechanism to gain initial control, namely the interception of x86 instruction decoding. We found that the interception and the additionally executed micro-ops cause a measurable timing difference. In this paper, we showed that a related technique, namely microcode-assisted instrumentation, already exhibits a measurable performance overhead. Our further tests indicate that even if only a single triad---the smallest possible insertion---is inserted into the logic of an instruction, the overhead can already be measured. 
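The comparison step of such a timing measurement can be sketched as below; this is an illustrative assumption about how a detector might process the data, where the cycle counts are taken to come from an RDTSC-based measurement loop run once without and once with the update under test, and the threshold is arbitrary.

```c
#include <stdlib.h>

/* Sketch of the comparison step of a timing-based hook detector.
 * Cycle counts are assumed to come from an RDTSC measurement loop;
 * the threshold value is illustrative. */
static int cmp_cycles(const void *a, const void *b) {
    unsigned long long x = *(const unsigned long long *)a;
    unsigned long long y = *(const unsigned long long *)b;
    return (x > y) - (x < y);
}

/* The median is robust against outliers caused by interrupts
 * or cache effects during measurement. */
static unsigned long long median_cycles(unsigned long long *v, size_t n) {
    qsort(v, n, sizeof *v, cmp_cycles);
    return v[n / 2];
}

/* Flag an instruction as hooked if it became slower than the
 * no-update baseline by more than `threshold` cycles. */
static int looks_hooked(unsigned long long baseline,
                        unsigned long long with_update,
                        unsigned long long threshold) {
    return with_update > baseline + threshold;
}
```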
Given the unavoidable overhead of switching to the microcode \\ac{RAM}, a backdoor inserted via a microcode update is in general detectable.\n\nA detection engine can create a baseline by measuring the timing of all instructions with no microcode update applied. Then the engine takes a second measurement with the update under test, compares the results, and reports any timing differences. Note that this method only detects x86 instruction hooks and not necessarily malicious behavior. A malicious update does not always need to insert additional logic into existing instructions; it could, for example, modify the handling of certain, potentially undocumented, \\acp{MSR}.\n\nIn order to also detect such modifications, the microcode update needs to be decoded and, for example, statically analyzed. Program analysis methods would also consider logic that is not inserted at instruction decoding but at other internal processes, such as exception handling on the microarchitectural level. It is also possible to reason about the Trojan's semantics, thus yielding more accurate results. Trojans (or CPU vulnerabilities that can be exploited as backdoors) can also occur in the microcode \\ac{ROM}. The detection of these is more challenging, because their behavior is also contained in the baseline measurement and the \\ac{ROM} contents need to be read out to apply static analyses.\n\nHowever, the same problems that plague traditional malware identification are also applicable to the detection of microcode Trojans. Even if the whole microcode, both \\ac{ROM} and \\ac{RAM}, is available for analysis, it can be hard to determine if a certain code fragment is benign or malicious in nature. This problem is amplified due to the limited understanding of microcode internals. But even access to the full documentation on the subject would not be sufficient, as it is possible to use obfuscation to hide the true nature of a code fragment. 
Lastly, it would be possible to insert a backdoor outside of the microcode engine and directly change the other functional units of the \\ac{CPU}. All in all, detecting microcode Trojans---or hardware backdoors in general---is a difficult problem in the face of powerful adversaries.\n\n\\subsection{Supporting Newer and Different Architectures}\n\nWhile we were able to apply our understanding of the K8 architecture\nto programming for the K10 architecture, other\narchitectures are far more difficult to support.\nAs the K10 is a close evolution of the K8,\nthe microcode engine remained largely the same.\nWe mainly noticed differences in the selection of microcoded instructions.\nFor example, the K10 architecture moved the decoding of all \\texttt{ret} instructions to hardware,\nwhile the K8 still performed decoding for some of its variants in microcode.\nMoving more instructions to the hardware decoder usually results\nin better performance as microcoded decoding takes more time.\nDuring our investigation, we also determined that the entry points for microcoded instructions were constant between K8 and K10,\nbut the implementation then branched to different triads during execution.\n\nThe major problem when adapting our findings to new architectures is the strong cryptographic authentication of microcode updates for newer \\acp{CPU}.\nOnly with the ability to execute arbitrary code on the hardware\nwas it possible to gain an understanding of the fundamental encoding of microcode~\\cite{koppe2017reverse}.\nWithout such a possibility, any analysis is restricted to interpreting existing code,\nusually in the form of microcode updates.\nHowever, even the K8 and K10 architectures use a form of scrambling to obfuscate the plain text of the updates.\nAnalysis of more modern updates shows that those are most likely protected by strong\ncryptographic primitives~\\cite{hawkesmicrocode} and thus cannot be analyzed as is.\nHowever, even if the plain text of such an update is 
acquired,\nwithout a specification or a system to execute the code, it is still challenging to recover the microcode semantics.\nLarge amounts of data and at least some basic information on the intended functionality of the update would be needed to infer any meaning.\nGiven the comparatively small size of microcode updates (usually in the range of hundreds of kilobytes for a single \\ac{CPU}),\nthis would probably not be feasible in practice.\n\nAnother possibility is the analysis of the microcode \\ac{ROM} or engine directly.\nAnalyzing the engine itself would yield a detailed understanding\nof the encoding and available functionality of microcode,\nbut modern small feature sizes and the high complexity of current \\acp{CPU} render this approach difficult.\nWhile reading the \\ac{ROM} directly is not as difficult as analyzing a highly optimized microcode engine,\nit does not immediately yield the plain text microcode.\nAs our reverse engineering process showed,\nwe had to invert multiple permutations of the readout bits in order to obtain the plain text encoding.\nThis process was heavily dependent on both previous understanding of the\nencoding and the ability to execute chosen microcode on the \\ac{CPU},\nboth of which would not be available.\nAlso there would be no way of verifying the findings,\nas the \\ac{CPU} would not accept custom updates without the correct signature.\nWhile the public key of the signature could possibly be extracted from the \\ac{CPU},\nthe required private key would only be available to the vendor.\nModifying a single \\ac{CPU} via chip editing might resolve this issue,\nbut such an approach again requires massive hardware reverse engineering efforts\nand access to specialized and expensive lab equipment able to operate at the small feature size.\nAlso such an edit would only allow a single \\ac{CPU} to load the custom update,\nany unedited \\ac{CPU} would refuse it.\n\nIn summary, supporting newer \\acp{CPU} is mostly prevented by 
strong authentication of microcode updates.\nOnce the authentication is circumvented,\ne.g.,\nby the use of chip editing or side-channel attacks,\nour reverse engineering methods can be applied to infer microcode features.\nHowever, vendor support for custom microcode updates is still the\nmost viable approach to modifying the behavior of \\acp{CPU}.\n\n\\section{Conclusion}\n\nVulnerabilities affecting security and safety have accompanied computer systems since their early days. To cope with attacks, numerous defense strategies have been integrated both in software and hardware. In particular, hardware-based defenses implemented with microcode provide increased security and performance, as recently shown by the microcode updates released to address \\textsc{Spectre} and \\textsc{Meltdown}. However, little is publicly known about how security mechanisms are implemented in hitherto closed-source microcode.\n\nIn this paper, we demonstrated how modern system security defenses and tools can be implemented in microcode on a modern \\ac{COTS} x86 \\ac{CPU}. Among others, we provided details on how to implement timing attack mitigations, instruction set randomization, and enclave functionality. To this end, we first uncovered new x86 microcode details through more in-depth hardware reverse engineering and novel strategies to validate the semantics. Finally, we discussed perspectives of customizable microcode and highlighted useful primitives offered by microcode to arm the system security defense landscape. \n\nIn order to foster future research in the area of processor microcode and its applications, we publish the source code of the applications described in this paper as well as the framework used for manipulating and generating microcode~\\cite{microcode:amd_microprograms}. 
We hope this will enable other researchers to extend and build upon our work to design and implement microprograms.\n\n\\section*{Acknowledgement}\nWe thank our shepherd Mathias Payer and the anonymous reviewers for their valuable feedback.\nPart of this work was supported by the European Research\nCouncil (ERC) under the European Union's Horizon 2020\nresearch and innovation programme (ERC Starting Grant No.\n640110 (BASTION) and ERC Advanced Grant No. 695022 (EPoCH)).\nIn addition, this work was partly supported by the German Federal Ministry of Education and Research (BMBF Grant 16KIS0592K HWSec and BMBF Grant 16KIS0820 emproof).\n\n\\balance\n\n\n\n\\section{Introduction}\n\nNew vulnerabilities, design flaws, and attack techniques with devastating consequences for the security and safety of computer systems are announced on a regular basis~\\cite{cvestats}.\nThe underlying faults range from critical memory safety violations~\\cite{cve:stats:memory} or input validation errors~\\cite{cve:stats:validation} in software to race conditions or side-channel attacks in the underlying hardware~\\cite{intel:errata,amd:errata,Kocher2018spectre,Lipp2018meltdown,projectzeromeltdownspectre,Hund,Doychev:2013:CTS}. To cope with erroneous behavior and to reduce the attack surface, various defenses have been developed and integrated in software and hardware over the last decades~\\cite{van2017dynamics,Szekeres:2013:EWM}.\n\nGenerally speaking, defenses implemented in software can be categorized as either compiler-assisted defenses~\\cite{ASAN,aslr,dep,onarlioglu2010g,ASLR-Guard,readactor:sp15,backes2014oxymoron} or binary defenses~\\cite{wartell2012binary,pappas2012smashing,Gawlik,abadi2005control,Isomeron:ndss15}. Note that operating system changes~\\cite{XnR:2014,readactor:sp15,aslr,dep} represent an orthogonal approach that serves both compiler-assisted and binary defenses. 
While compiler-assisted defenses require access to the source code and re-com\\-pi\\-la\\-tion of the software, binary defenses based on \\textit{static binary rewriting}~\\cite{wang2015reassembleable,laurenzano2010pebil,romer1997instrumentation} or \\textit{dynamic instrumentation}~\\cite{dyninst:2011,luk2005pin,nethercote2007valgrind,dynamorio} can also be leveraged for legacy and \\ac{COTS} programs. However, these binary defense strategies have two fundamental drawbacks: on the one hand, binary rewriting relies on the ability to accurately discover and disassemble all executable code in a given binary executable~\\cite{andriesse2016depth}. Any misclassified code or data breaks soundness and may thus void security guarantees, cause program termination, or lead to incorrect computations. On the other hand, dynamic instrumentation executes unmodified binaries and inserts instrumentation logic with methods such as \\textit{emulation} or \\textit{hooking} during runtime. While this approach does not require the availability of a perfect disassembly, it typically causes significant performance overheads and thus can be prohibitively expensive in practice.\n\nOver the past decades, various defense mechanisms have been implemented in hardware to increase both security and performance. For example, dedicated security features to mitigate exploitation of memory-corruption vulnerabilities include Data Execution Prevention~\\cite{dep}, \\ac{XoM}~\\cite{XnR:2014,readactor:sp15,intelsdm}, \\ac{CFI}~\\cite{abadi2005control,intel2016cet} and Shadow Stacks~\\cite{intel2016cet,dang2015performance}. Moreover, sophisticated trusted computing security features were integrated in \\acp{CPU}~\\cite{anati2013innovative,iacr:2016:86}. \n\nBut not only novel defense mechanisms have been integrated in hardware: similar to any complex software system, erratic behavior exists in virtually any commercially available \\ac{CPU}~\\cite{intel:errata,amd:errata}. 
To this end, x86 \\ac{CPU} vendors integrated in-field update features (e.g., to turn off defective parts or patch erroneous behavior). More precisely, the microcode unit, which translates between the user-vis\\-i\\-ble \\ac{CISC} \\ac{ISA} and the hardware-internal \\ac{RISC} \\ac{ISA}, can be updated by means of so-called \\textit{microcode updates}~\\cite{patent:2002:patch_device,koppe2017reverse}.\nSince microcode is proprietary and closed source, and more and more complex security features are integrated into hardware with the help of microcode (e.g., Intel SGX~\\cite{iacr:2016:86}), there is only a limited understanding of its inner workings and thus we need to trust the \\ac{CPU} vendors that the security mechanisms are implemented correctly. In particular, the \\ac{CPU}'s trustworthiness is challenged since even recently published microcode updates have been shown to cause incorrect behavior~\\cite{intelspectreretract} and several attacks on hardware security features have been demonstrated~\\cite{Kocher2018spectre,Lipp2018meltdown,projectzeromeltdownspectre,lee2017inferring,206170}. Moreover, since older \\ac{CPU} generations are not updated to defend against sophisticated attacks such as \\textsc{Spectre} or \\textsc{Meltdown}~\\cite{intelspectrepatch}, these \\acp{CPU} remain unprotected against the aforementioned attacks, which are increasingly adopted in real-world malware~\\cite{fortinetmdspecmw}.\n\n\\par{\\bf Goals and Contributions.}\nIn this work, we focus on constructive applications of x86 processor microcode for the modern system security landscape. \nOur goal is to shed light on how currently employed defenses may be realized using microcode and thus tack\\-le shortcomings of the opaque nature of x86 \\acp{CPU}.\nBuilding upon our recent work on microcode~\\cite{koppe2017reverse}, we first present novel reverse engineering strategies which ultimately provide fine-grained understanding of x86 microcode for a \\ac{COTS} AMD K8 \\ac{CPU}. 
\nOn this basis, we demonstrate multiple constructive applications implemented in microcode which considerably reduce the attack surface while avoiding the performance overheads of software-only solutions. \nFinally, we discuss benefits and challenges of customizable microcode for future systems and applications. \n\n\\smallskip \\noindent\nIn summary, our main contributions are: \n\\begin{itemize}\n\n\\item {\\bf Uncovering New x86 Microcode Details.}\nWe present new reverse engineering results that extend and complement the publicly available knowledge of AMD K8 \\ac{CPU} microcode technology, specifically its microcode \\ac{ROM}. \nTo this end, we develop a novel reverse engineering strategy that combines chip-level reverse engineering and image processing with a custom microcode emulator in order to recover and validate microcode semantics in a semi-automatic fashion. \nIn particular, this reverse engineering step enables us to better understand the hitherto opaque microcode by analysis of its \\ac{ROM} and microcode updates. %\n\n\\item {\\bf Perspectives of Customizable Microcode.}\n We analyze the capabilities of microcode and its updates to identify building blocks that can be used to strengthen, extend, or supplement system security defenses. This includes microcode-based methods to enable or disable CPU features at runtime, a method to intercept low-level CPU processes, an isolated execution environment within the microcode engine, and the possibility to extend and modify the x86 \\ac{ISA}.\nWith regard to the trustworthiness of systems, we discuss a method to detect the presence of microcode backdoors and the challenges associated with such a detection.\n\n\n\\item {\\bf Implementation of Microcode-Assisted Defenses.}\nWe show how modern system defenses and tools can be implemented with microcode on a \\ac{COTS} AMD x86 \\ac{CPU} using the identified primitives. 
\nTo this end, we implemented several case studies to demonstrate that timing attack mitigation, hardware-assisted address sanitization, and instruction set randomization can be realized in microcode. \nIn addition, we realized a microcode-assisted hooking framework that allows fast filtering directly in microcode.\nFinally, we showed how a secure microcode update mechanism and enclave functionality can be implemented in microcode.\nThe framework used for the deconstruction\n and manipulation of microcode, including the assembler and disassembler, as well as the microcode programs we created and the microcode emulator, are publicly available at \\url{https:\/\/github.com\/RUB-SysSec\/Microcode}~\\cite{microcode:amd_microprograms}.\n\\end{itemize}\n\n\n\n\n\\section{Microcode Reverse Engineering}\n\nA key contribution of this paper is the further analysis of the \\ac{ROM}\nreadouts provided by Koppe~et\\,al.\\xspace~\\cite{koppe2017reverse} to gather more details on the\nimplementation of both microcode itself and---more importantly---of the microcoded instructions.\nWhile the authors were able to identify operations and triads in the readout,\nthey were unable to reconstruct how they map to logical addresses.\nTherefore,\nthey could not locate and analyze the microcode that implements a \\emph{specific} x86 instruction.\nHowever,\nthese steps are crucial for hooking more advanced x86 instructions that\nrequire knowledge of the underlying implementation in the microcode \\ac{ROM}.\nThe analysis of existing microcode implementations was essential for\nthe case studies presented in Section~\\ref{cucode:section:case_study}.\n\n\\begin{figure}[t]\n\t\\centering\n\t\\resizebox{\\linewidth}{!}{\n\t\t\\includegraphics[scale=0.01]{figures\/re_process.png}\n\t}\n\t\\caption{High-level overview of the individual steps of the \\ac{ROM} reverse engineering process.}\n\t\\label{cucode:figure:rom:re_process}\n\\end{figure}\n\nThe key requirement for such an 
analysis is the ability to locate\nthe corresponding implementation in the microcode \\ac{ROM}.\nWe therefore require a mapping of observable addresses to the physical location in the \\ac{ROM} readout.\nGoing forward,\nwe define two different classes of addresses:\n\\begin{itemize}\n \\item \\emph{logical addresses} are used when the microcode software refers to a specific triad (e.\\,g.\\xspace, in the match registers or jumps).\n\t\\item \\emph{physical addresses} are the addresses assigned to triads in the ROM readouts during analysis.\n\\end{itemize}\nThese addresses are not related to the virtual and physical addresses used when addressing the main memory---what is\ncommonly known as the \\emph{virtual memory layout} of processes.\nAlso note that the address granularity for microcode is one triad;\nthe individual operations forming a triad are not addressable.\n\nThus, it is our goal to reverse engineer the algorithm used to map a given logical address to its corresponding physical\naddress.\nA high-level overview of this process is illustrated in Figure~\\ref{cucode:figure:rom:re_process}.\nWe used the following steps to recover the ordered microcode \\ac{ROM}:\n\\begin{itemize}\n\t\\item \\raisebox{.5pt}{\\textcircled{\\raisebox{-.9pt} {1}}}~Convert \\ac{SEM} images\n\tof each region to bitstrings with the aid of image recognition software.\n\t\\item \\raisebox{.5pt}{\\textcircled{\\raisebox{-.9pt} {2}}}~Reorder\n\tand combine the resulting bitstrings into a list of unordered triads.\n\t\\item \\raisebox{.5pt}{\\textcircled{\\raisebox{-.9pt} {3}}}~Reconstruct the mapping between logical\n\tand physical microcode addresses as well as reorder the triads according to this mapping.\n\t\\item \\raisebox{.5pt}{\\textcircled{\\raisebox{-.9pt} {4}}}~Disassemble the resulting triad list into a continuous,\n\tordered stream of instructions.\n\\end{itemize}\nThe first step,\nthe conversion of images to bitstrings,\nwas already performed by 
Koppe~et\\,al.\\xspace~\\cite{koppe2017reverse} and we used this data as the starting point for our further analysis.\nThe authors had also already combined parts of the readouts into triads.\nWe built upon this and recovered the remaining part of the triads,\nwhich is depicted as step \\raisebox{.5pt}{\\textcircled{\\raisebox{-.9pt} {2}}}~in the figure.\nThe details of this step are described in\nSections~\\ref{cucode:section:re:layout}\nand~\\ref{cucode:section:re:ordering}.\nStep \\raisebox{.5pt}{\\textcircled{\\raisebox{-.9pt} {3}}},~the recovery of the mapping algorithm,\nconstituted the majority of our efforts.\nWe outline the approach we used in Section~\\ref{cucode:section:re:approach}\nand provide details of the solutions we developed in the following sections.\nThe mapping was reverse engineered for an AMD K8 processor.\nHowever, our approach is also applicable to the K10 architecture, given the\nsimilarities between the two architectures.\nFor the last step, we extended the disassembler used by Koppe~et\\,al.\\xspace~\\cite{koppe2017reverse} to include details learned during our own analysis.\n\n\n\n\\begin{figure}[!t]\n\t\\centering\n\t\\resizebox{\\linewidth}{!}{\n\t\t\\includegraphics{figures\/ucode_Overview.png}\n\t}\n\t\\caption{\\ac{SEM} image of region R1 showing arrays A1 to A4 and the SRAM holding the microcode update. 
The higher-resolution raw image is available in Appendix~\\ref{cucode:section:appendix:rom}.}\n\t\\label{cucode:figure:rom:region}\n\\end{figure}\n\n\\subsection{Physical Layout}\n\\label{cucode:section:re:layout}\nThe physical storage is composed of three larger regions of \\ac{ROM} (R1 to R3),\nwhich were identified as the area containing the operations,\nand a smaller region (R4) containing the sequence words.\nPrevious work~\\cite{koppe2017reverse} already performed permutations such as inversion and interleaving of bit rows to recover whole operations in the correct bit order.\nIn addition, the algorithm for constructing triads out of three operations was known.\nThe triads are built by loading a single operation out of each of the three regions R1 to R3 and loading the corresponding\nsequence word from region R4. Thereby, the operations belonging to one triad have the same offset relative to the start of their corresponding region.\nThe different subregions of a single \\ac{ROM} region are illustrated in Figure~\\ref{cucode:figure:rom:region}; more technical details are provided in Appendix~\\ref{cucode:section:appendix:rom}.\nWe will use the same naming convention in the following.\n\nThe hardware layout suggested that the triads are organized in four arrays (A1 to A4), with A1,\nA3 and A4 containing data for 1024 triads each and A2,\nwhich is physically smaller than the other arrays,\nfor 768 triads.\nThis organization means that the first triad\nwill use bits extracted from R1:A1,\nR2:A1 and R3:A1 as its operations and the sequence word is obtained from the bits located in R4:A1.\nAs the regions are no longer relevant after combining the triads,\nthey will be omitted in further notations.\nEach of the arrays is subdivided into blocks B1 to B4, each containing 256 triads.\nThe exception to this is the array A2:\nwhile the hardware layout suggests the presence of four blocks with a smaller number of triads each,\nwe mapped the contents to three blocks with 
256 triads each.\nThis means array A2 contains only 768 triads in contrast to the 1024 triads contained in the other arrays.\n\nWe were also able to locate the microcode\npatch \\ac{RAM},\nwhich is loaded with the microcode updates during runtime.\nThe \\ac{RAM} needs to be placed physically close to the rest of the microcode engine to keep signal paths short;\nhowever, its exact location was previously unknown.\nUsing new images taken with a \\ac{SEM}, we could classify the area between arrays A2 and A3 as \\ac{SRAM}.\nThe area is marked in Figure~\\ref{cucode:figure:rom:region}.\nWe determined the storage type based on detailed images of the region\nand additional cross-section images.\nBoth showed visual structures specific to \\ac{SRAM}.\nThis location also contains a visually different control logic,\nwhich further indicates a different type of storage than the rest of the region.\nA higher-resolution image and additional details are available in Appendix~\\ref{cucode:section:appendix:rom}.\nIt should be noted that the usage of two different classes of storage in this close proximity implies a highly optimized hardware layout.\nThe SRAM marked in the figure contains $32\\times64$ bits,\nwhich is the amount of data needed per region for 32 triads.\nThis corresponds to the maximum update size of 32 triads determined in our experiments.\nDue to the additional complexity of implementing a fast readable and writable memory in hardware,\nthe \\ac{SRAM} occupies roughly the same space as a \\ac{ROM} block with 256 triads.\n\n\\subsection{Physical Ordering}\n\\label{cucode:section:re:ordering}\nAnother insight gained from the available readout was that not only the three operations forming a triad exhibited data\ndependencies between each other (suggesting that the triads are indeed correctly combined),\nbut in some cases data flow was visible between triads immediately following each other.\nThis means the readout already partially placed related triads close 
to each other.\nBased on this observation, we retained the triad order and by convention placed all triads after one another with increasing\naddresses.\nThis yielded what we considered a continuous physical memory space with addresses starting at 0 and increasing with each\ntriad to 0xEFF.\nThis is consistent with the observation that the microcode patch RAM starts at the address 0xF00 for the K8 series of\nprocessors.\n\nOur physical memory space assumed an arbitrary ordering of A1 -- A3 -- A4 -- A2,\nso A1 would contain addresses from 0x0 to 0x3FF,\nA3 from 0x400 to 0x7FF,\nA4 from 0x800 to 0xBFF and A2 from 0xC00 to 0xEFF.\nWe placed A2 last\nbecause it contained fewer triads, which we assumed to be missing at the end of the addressable space.\nIn each array, we ordered the blocks starting from the bottom of the image in Figure~\\ref{cucode:figure:rom:region},\nomitting the missing block B4 in array A2.\nPhysical address 0x0 is thus located in A1:B1 and 0xEFF in A2:B3.\n\n\\subsection{Mapping Recovery Approach}\n\\label{cucode:section:re:approach}\nOur recovery approach infers the mapping from address pairs.\nWe chose this approach because it was infeasible to recover the mapping via hardware analysis.\nThe addressing logic is complex and the connections span multiple layers, each of which would require delayering and subsequent imaging.\nEach address pair maps a logical (microcode) address to a physical address.\nOnce the recovered function correctly produces the physical address for any given logical address in our test set,\nwe can assume that it will be correct for any further translations.\nWe thus needed a sufficiently large collection of address pairs.\nUnfortunately, the microcode updates only provided two usable data points.\n\nTherefore, we developed an approach that (i) executes all \\ac{ROM} triads on the \\ac{CPU} individually and extracts the observable semantics of a given logical address, (ii) emulates each triad we acquired from 
the physical \\ac{ROM} readout in a custom microcode emulator to extract the semantics for a given physical address, and (iii) correlates the extracted semantics to find matching pairs of physical and logical addresses. Details of this process are described in Section~\\ref{cucode:section:re:emulation}. This resulted in a total of $54$ address pairs.\nThe results were then reviewed in a manual analysis step to find the correct permutation of triads for a given block.\nOnce a permutation candidate for a block is found, it can be verified by checking the correctness of additional triads. Both the process and its results are described in Section~\\ref{cucode:section:permutation}.\n\n\n\nIn combination with executing known triads directly from \\ac{ROM} and extracting their side effects, we can correlate the emulated instructions with their counterparts at known addresses.\n\n\\subsection{Microcode Emulation}\n\\label{cucode:section:re:emulation}\nIn order to gather a sufficiently large number of data points to\nreverse engineer the fine-grained mapping of the \\ac{ROM} addresses,\nwe implemented a microcode emulation engine.\nThis emulation engine is designed to replicate the behavior of the \\ac{CPU} during the execution of a given triad.\nThis means that for any given input,\nthe output of both the physical \\ac{CPU} and our emulation engine should be identical.\nAs our analysis framework is implemented in Python,\nwe also chose this language to implement the emulator.\nThe emulator is designed to interpret the bitstrings extracted from\nthe \\ac{CPU} and first disassembles them using our framework.\nFor each individual micro-op, this yields the operation as well as the source and target operands.\nThe operations themselves are implemented as Python lambdas modifying the indicated registers.\nThis allows for simple extension of the supported instruction list.\nFor each triad the emulator returns a changeset indicating the changed registers and their new 
values.\nCurrently, this is done on a triad-by-triad basis to support our reverse engineering method.\nHowever, by supplying the changed register set as the input state for the next\ntriad, the emulation can be performed for any number of triads in sequence.\nThe emulation engine currently supports all of the identified arithmetic microcode operations.\nAdditionally, we supply a whitelist of instructions that produce no visible effect on the specified registers.\nWhile these instructions have side effects when executed on the \\ac{CPU},\nthey are treated as no-ops,\nbecause only the visible state of the registers is considered in our further analysis.\nThe instructions and their behavior are based on previous reverse engineering results.\nWe ensured that we correctly identified a certain instruction by executing the bitstring of the instruction\nin a microcode update applied to a real CPU and observing the effects on the specified registers with varying inputs.\n\nHowever,\nas the \\ac{ROM} contains operations that implement unknown behavior,\nmost importantly reading and writing internal status registers\nor collecting information on the currently executed instruction,\nwe were unable to accurately emulate all of the triads.\nAdditionally, the readout itself introduced both potential bit errors and sections that\ncould not be read due to dust particles or other disturbances in the raw image.\nWe thus opted to only consider triads for further analysis that (i) contain\nonly known instructions and (ii) were not part of an unreadable section.\nThis emulation yielded the behavior of triads with known physical addresses for a given input state.\nThe input state assigned a different value to every x86 and usable microcode register.\nDuring testing we observed that not all microcode registers can be freely assigned to;\nsome will trigger erratic \\ac{CPU} behavior leading to crashes or loss of control.\nThus,\nwe had to exclude certain registers from our tests.\nOur 
input and output state contains all six x86 general-purpose registers (we excluded the stack\nmanipulation registers EBP and ESP) as well as in total 22 internal microcode registers.\n\nTo gather the behavior for known logical addresses,\nwe forced execution of each \\ac{ROM} triad directly on the \\ac{CPU}.\nFor this execution,\nwe chose the same input state that was previously used for the emulation.\nThe input state was set by a sequence of x86 instructions setting the x86 registers to the chosen values.\nThe microcode registers were then set after entering microcode by a sequence\nof micro-ops preceding the jump to the triad address to be tested.\nThe output was gathered by writing out the changed registers as specified\nby our emulator to x86 registers using microcode executed after the tested triad.\nDue to the different values for each register,\nwe could determine which register was used as an input in the tested triad as well as the operation performed on it.\nHowever,\nwe also had to exclude a large number of logical addresses as those triads led to a\nloss of control or showed a behavior that was independent of the given input state.\nIn combination,\nthese two tests yielded a collection of address pairs consisting of the\nphysical address of a candidate triad and the logical address of the triad.\n\n{\\captionsetup[figure]{skip=5pt}\n\\begin{figure*}[!t]\n\t\\begin{minipage}{2\\columnwidth}\n\t\\centering\n\t\\resizebox{0.80\\linewidth}{!}{\n\t\t\\includegraphics{figures\/rom_mapping.png}\n\t}\n\t\\caption{Translation of logical to physical microcode \\ac{ROM} addresses.}\n\t\\label{cucode:figure:rom:mapping}\n\t\\end{minipage}\n\\end{figure*}\n}\n\n\\subsection{Permutation Algorithms}\n\\label{cucode:section:permutation}\n\nAfter gathering the microcode address pairs, we had to reconstruct the function used to map these onto each other.\nDue to the hardware layout and hardware design possibilities, we determined a number of different candidate 
permutation functions.\nAdditionally, we used the data points gathered in the previous step to develop new algorithmic options.\nWe then applied these possible functions in combination to test whether they were used for a specific triad.\n\nVia this empirical testing, we found that the \\ac{ROM} uses the following permutations:\n\\begin{itemize}\n\t\\item T: table-based 16 triad-wise permutation, illustrated in Table~\\ref{cucode:figure:algorithms:table}\n\t\\item R: reverse counting direction, mapping higher physical address triads to lower logical addresses\n\t\\item S: pairwise swap of two consecutive triads\n\t\\item L: custom table-based 16 triad-wise permutation for the last block, illustrated in Table~\\ref{cucode:figure:algorithms:table}\n\\end{itemize}\n\nTo determine the combination of permutations used for a specific address pair,\nwe verified the possibilities by calculating the physical address for the given logical address.\nIf the result matches the expected value,\nthe combination is correct.\nThe combination found is then used to calculate the physical addresses for the rest of the data points.\nOnce a mismatch is found, the first approach is repeated to determine the next combination of permutations.\n\nWe determined that the mapping function is constant for 256 triads at a time;\nafter that, the combination of algorithms changes.\nWe also had to account for potentially swapped 256 triad blocks,\nso in case of a mismatch the remaining triad blocks in a region were then considered.\nThis yielded the mapping algorithm for all but the last 256 triads.\nThe last block uses a different mapping algorithm that was reconstructed manually.\nThe detailed mapping of all triad blocks is given in Figure~\\ref{cucode:figure:rom:mapping}; Table~\\ref{cucode:figure:algorithms:table} illustrates the permutation algorithms T and L.\n\n\\begin{table}[!htb]\n\\centering\n\\begin{tabular}{|l|l|l|}\n\\hline\nPhysical & Logical (T) & Logical (L)\\\\\n\\hline\n\\texttt{0x00} & 
\\texttt{0x00} & \\texttt{0x00}\\\\\n\\texttt{0x10} & \\texttt{0x20} & \\texttt{0x10}\\\\\n\\texttt{0x20} & \\texttt{0x40} & \\texttt{0x20}\\\\\n\\texttt{0x30} & \\texttt{0x60} & \\texttt{0x30}\\\\\n\\texttt{0x40} & \\texttt{0x80} & \\texttt{0x40}\\\\\n\\texttt{0x50} & \\texttt{0xA0} & \\texttt{0x50}\\\\\n\\texttt{0x60} & \\texttt{0xC0} & \\texttt{0x60}\\\\\n\\texttt{0x70} & \\texttt{0xE0} & \\texttt{0x70}\\\\\n\\texttt{0x80} & \\texttt{0x10} & \\texttt{0xF0} (RS)\\\\\n\\texttt{0x90} & \\texttt{0x30} & \\texttt{0xE0} (RS)\\\\\n\\texttt{0xA0} & \\texttt{0x50} & \\texttt{0xD0} (RS)\\\\\n\\texttt{0xB0} & \\texttt{0x70} & \\texttt{0xC0} (RS)\\\\\n\\texttt{0xC0} & \\texttt{0x90} & \\texttt{0xB0} (RS)\\\\\n\\texttt{0xD0} & \\texttt{0xB0} & \\texttt{0xA0} (RS)\\\\\n\\texttt{0xE0} & \\texttt{0xD0} & \\texttt{0x90} (RS)\\\\\n\\texttt{0xF0} & \\texttt{0xF0} & \\texttt{0x80} (RS)\\\\\n\\hline\n\\end{tabular}\n \\caption{Translation of addresses for the T and L algorithms. The L algorithm applies the R and S permutations to the higher addresses after the table-based permutation.}\n \\label{cucode:figure:algorithms:table}\n\\end{table}\n\n\\section{Appendix}\n\\subsection{Hardware Details of the Microcode ROM}\n\\label{cucode:section:appendix:rom}\n\nFigure~\\ref{cucode:figure:appendix:rom} shows a \\ac{SEM} image of one of the four \\acp{ROI}.\nAs an extension to previous work by Koppe~et\\,al.\\xspace~\\cite{koppe2017reverse},\nwe further delayered the chip to analyze the region\nabove the array A2 --- the second array from the bottom.\nIts repetitive structure looked visually different compared to the other analyzed NOR-\\ac{ROM} arrays.\nA cross section and an additional delayering process revealed a prominent structure in the layer underneath, which led us to identify the area as \\ac{SRAM}.\nCompared to modern \\ac{DRAM},\n\\ac{SRAM} uses more space but can be manufactured in the same process as the adjacent NOR-\\ac{ROM}.\nAdditionally, \\ac{SRAM} does not require 
periodic refreshes to retain the\nstored data and is often used in microcontrollers and smaller \\acp{SOC}.\nThe usage of two different storage types in this close proximity\nis an indication of a highly optimized in-house design process.\nThe common practice is to use (third-party) \\ac{IP} cores providing a single memory type.\n\nIn the \\ac{ROM}, the microcode triads are ordered with an eight-line interleaving,\nmeaning that in a linear readout the successor of a triad is found seven triads ahead.\nThis ordering was verified by searching for all-zero triads at the end of the array A2.\nAfter encountering the first all-zero triad,\nmore were found at the expected distance of seven triads.\nMoreover, the hardware layout already hints at the usage of this technique.\nNote that these and other techniques used are not implemented for the sake of obfuscating the \\ac{ROM} contents,\nbut to optimize the storage with regard to die area.\n\n\\begin{figure*}[!htb]\n\t\\begin{minipage}{\\columnwidth}\n\t\\centering\n\t\\resizebox{0.8475\\linewidth}{!}{\n\t\t\\includegraphics{figures\/ROI2_Overview_v2.png}\n\t}\n\t\\caption{\\ac{SEM} image of region R1. The middle part contains the wiring and addressing for the \\ac{ROM} and \\ac{RAM}. 
To reduce the average signal path length, the wiring is placed between the two memory areas.}\n\t\\label{cucode:figure:appendix:rom}\n\t\\end{minipage}\n\\end{figure*}\n\n\\subsection{\\acs{RTL} Representations of Microcode Programs}\n\nIn the following, we list the \\ac{RTL} form of our custom microcode programs described in the paper.\nThe \\ac{RTL} is the same as used by Koppe~et\\,al.\\xspace~\\cite{koppe2017reverse} and follows the x86 assembly syntax closely.\nWhere appropriate, the differences to the x86 syntax are highlighted in a comment.\nA major difference is the availability of a three operand mode.\nIn that case, the left-most operand is the destination,\nthe remaining two operands are the sources.\nMore examples can be found in our Github repository~\\cite{microcode:amd_microprograms}.\n\n{\\lstset{language=[x86masm]Assembler, deletekeywords={size}}\n\\begin{lstlisting}[\n\tcaption={Implementation of our custom \\texttt{rdtsc} variant with reduced accuracy.\nIt completely replaces the default by intercepting triad 0x318,\nthe entry point for this instruction on the K8 architecture.\nThe \\texttt{dbg} opcode that is used for the read of an internal register sets certain\nflags that are not currently supported with standard annotations in the \\ac{RTL}.\nWe omitted the check of the \\texttt{CR4.TSD} control bit,\nwhich optionally prevents access to this instruction from usermode.\nWhile we were able to partially reconstruct the check from the \\ac{ROM} readout,\nwe encountered a read error during this and cannot fully and reliably reconstruct the corresponding semantics.\nHowever, this is a limitation of the current state of reverse\nengineering and we are working on improving the readout method.},\n\tlabel=cucode:listing:rdtsc,captionpos=t\n]\n; implement default rdtsc semantics, loading TSC to edx:eax\n; emit a fixed bitstring, this instruction reads an internal register\ndbg 0001010000101111111000000011111111111111110001101010000000001011 \n; .q 
annotation switches to 64 bit operand size\n; srl performs a logic shift right\nsrl.q rdx, t9q, 32\nsrl.q rax, t9q, 0\n\n; load the and mask\nmov t1d, 0xffff\nsll t1d, 16\nor t1d, 0xff00\n\n; sequence word annotation, continue at the next x86 instruction\n; the following triad is still executed after this annotation\n.sw_complete\n\n; reduce accuracy of the lower 32 bit TSC\n; includes two operations as padding\nand eax, t1d\nadd t2d, 0\nadd t2d, 0\n\\end{lstlisting}\n\n\\captionof{lstlisting}{Assembly code of a test case for the \\ac{ISR}.\nThe original x86 assembly code is shown on the left. The right side is the translation performed by our transpiler.\nEach source instruction maps to a single replacement instruction.\nIn this case we used a single instruction, \\texttt{bound}, to implement all semantics, but it is also possible to repurpose multiple different x86 instructions.\nThe correct handler is selected by the lower 16 Bits of the displacement given in brackets.\nThe higher 16 Bits are used as an optional argument for the selected handler.\nIn the case of memory loads, the argument is the 16 Bit offset of the memory location to be loaded relative to a fixed base address.\nThe argument to the shift handler is the amount of bits to shift.\nThe mapping of handler number to semantics is the trivial case in this example: the handler indices are used directly.\nHowever, the full 16 Bits are available to identify handlers.\nThis also allows for using multiple different indices for the same handler, further strengthening the \\ac{ISR}.}\n\\label{cucode:listing:isr}\n\\begin{minipage}[t]{.45\\textwidth}\n \\begin{lstlisting}\nmov esi, [msg0]\nmov edi, [msg1]\nmov ecx, [rc]\n\nadd edi, ecx\nadd esi, edi\nmov edi, esi\nadd esi, esi\nshr esi, 8\nadd esi, edi\n \\end{lstlisting}\n\\end{minipage}\\hfill\n\\begin{minipage}[t]{.45\\textwidth}\n \\begin{lstlisting}\nbound esi, [eax + 0x1]\nbound edi, [eax + 0x40001]\nbound ecx, [eax + 0x180001]\n\nbound edi, [ecx + 
0x4]\nbound esi, [edi + 0x4]\nbound edi, [esi + 0x0]\nbound esi, [esi + 0x4]\nbound esi, [eax + 0x80003]\nbound esi, [edi + 0x4]\n \\end{lstlisting}\n\\end{minipage}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\\label{s:1}\nFor $d$ be a positive integer, let $X_i=\\{X_i(t),\\,t\\in \\R_+\\},\\,i=1,2,\\ldots,d$ be independent real-valued stochastic processes on the same probability space $\\ofp$. Define the \\pE{$d$-parameters} real-valued additive field (additive process)\n\\BQNY\nX(\\mathbf{t})=X(t_1,t_2,\\ldots,t_d)=\\sum_{i=1}^{d}X_i(t_i),\\quad \\mathbf{t}=(t_1,t_2,\\ldots,t_d)\\in \\R_+^d.\n\\EQNY\nThe additive process which plays a key role in studying of the general multiparameter processes, multiparameter potential theory, fractal geometry, spectral asymptotic theory has been actively investigated recently. To have a glance of these results, we refer the reader to \\cite{khoshnevisan2003measuring, chen2003small,karol2008small,khoshnevisan1999brownian,khoshnevisan2002level,khoshnevisan2004additive,khoshnevisan2009harmonic} and the references therein.\n\\newline\nOn the other hand, calculation of boundary non-crossing probabilities of Gaussian processes is a key topic both of theoretical and applied probability, see, e.g.,\n\\cite{Nov99,MR2009980,BNovikov2005,1103.60040,1079.62047,MR2576883,janssen}. Numerous applications concerned with the evaluation of boundary non-crossing probabilities relate to mathematical finance, risk theory, queuing theory, statistics, physics among many other fields. In the literature, most of contributions are only concentrate on the boundary non-crossing probabilities of Gaussian processes with one-parameter (e.g. Brownian motion, Brownian bridge and fractional Brownian motion), some important results of this field can see in \\cite{MR2028621,MR2175400,MR2016767,1137.60023,BiHa1,HMS12,MR3531495,MR3500417}. 
For multiparameter Gaussian processes, few cases are known about the boundary non-crossing probabilities (see, e.g., \\cite{Pillow,hashorva2014boundary,Bi2}).\n\\newline\nIn this paper, we concentrate on the calculation of boundary non-crossing probabilities of the additive Wiener field $W$, which is defined by \\begin{equation}\\label{eqhm-1}\nW(\\mathbf{t})= W_1(t_1)+W_2(t_2)+\\ldots+W_d(t_d), \\quad \\mathbf{t} \\in\\R^d_+,\n\\end{equation}\nwhere $W_i=\\{W_i(t), t\\in \\R_+\\}, i=1,2,\\ldots,d$ are independent Wiener processes defined on the same probability space $\\ofp$. It can easily be checked that $W$ is a Gaussian field with the covariance function given by\n\\begin{equation}\\label{eqhm-3}\n\\EE{W(\\mathbf{s})W(\\mathbf{t})}=\\sum_{i=1}^{d}s_i\\wedge t_i, \\quad\n\\mathbf{s}=(s_1,s_2,\\ldots,s_d), \\;\\mathbf{t}=(t_1,t_2,\\ldots,t_d).\n\\end{equation}\nFor two measurable functions $f,u:\\R_+^d\\rightarrow \\R$ we shall investigate upper and lower bounds for\n$$P_f=\\pk{W(\\mathbf{t})+f(\\mathbf{t})\\leq u(\\mathbf{t}),\\;\\mathbf{t}\\in\\R_+^d}.$$\nIn the following, we take $u$ to be a general measurable function and $f\\neq 0$ to belong to the reproducing kernel Hilbert space (RKHS) of $W$, which is denoted by $\\kHC$. 
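The probability $P_f$ above can be explored numerically. The following minimal Monte Carlo sketch is not part of the paper; the choices $d=2$, a constant boundary $u$, a linear trend $f(\mathbf{t})=\sum_i c_i t_i$ and the grid size are all illustrative assumptions. It exploits the additive structure: the supremum of $W(\mathbf{t})+f(\mathbf{t})$ over a box splits into a sum of per-coordinate suprema.

```python
import numpy as np

def boundary_noncrossing_prob(trend_slopes, u=1.0, T=1.0,
                              n_steps=200, n_paths=2000, seed=0):
    """Monte Carlo estimate of P_f = P(W(t) + f(t) <= u on [0,T]^d) for the
    additive Wiener field W(t) = sum_i W_i(t_i) and a linear trend
    f(t) = sum_i c_i t_i (illustrative choice of trend)."""
    rng = np.random.default_rng(seed)
    d = len(trend_slopes)
    dt = T / n_steps
    t = np.linspace(0.0, T, n_steps + 1)
    count = 0
    for _ in range(n_paths):
        # independent one-parameter Wiener paths with W_i(0) = 0
        incr = rng.normal(0.0, np.sqrt(dt), size=(d, n_steps))
        paths = np.concatenate([np.zeros((d, 1)), np.cumsum(incr, axis=1)],
                               axis=1)
        # sup over the grid of W(t)+f(t) decomposes into per-axis maxima
        shifted_max = sum((paths[i] + trend_slopes[i] * t).max()
                          for i in range(d))
        if shifted_max <= u:
            count += 1
    return count / n_paths

p0 = boundary_noncrossing_prob([0.0, 0.0])   # trend-free probability P_0
pf = boundary_noncrossing_prob([0.5, 0.5])   # positive trend: crossing more likely
```

With a fixed seed the same paths are reused, so the estimate with a positive trend can never exceed the trend-free one.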
A precise description of $\\kHC$ is given in Section \\ref{sec:pre}, where the inner product $\\langle f,g\\rangle$ and the corresponding norm $\\|f\\|$ for $f,\\,g \\in \\kHC$ are also defined.\n\nAs in \\cite{HMS12}, a direct application of Theorem 1' in \\cite{LiKuelbs}\nshows that for any $f\\in \\kHC $ we have\n\\newcommand{\\Abs}[1]{\\Bigl\\lvert #1 \\Bigr\\rvert}\n \\BQN\\label{eq:00:2b}\n\\Abs{P_f - P_0} &\\le \\frac {1 }{\\sqrt{2 \\pi}} \\norm{ f}.\n\\EQN\nFurther, for any $g\\in\\kHC$ such that $g\\geq f$, we obtain\n\\BQN\\label{eq:WL}\n\\Phi(\\alpha - \\norm{g}) \\le P_{g}\\le P_f \\le \\Phi(\\alpha+ \\norm{f}),\n\\EQN\nwhere $\\Phi$ is the distribution function of an $N(0,1)$ random variable and $\\alpha=\\Phi^{-1}(P_0) $ is a finite constant. When $f\\le 0$ we can always take $g=0$ above, which makes the lower bound of \\eqref{eq:WL} useful if $\\norm{f}$ is large. When $\\norm{f}$ is small, equation \\eqref{eq:00:2b} provides a good bound for the rate of approximation of $P_f$ by $P_0$. Since explicit formulas for computing $P_f$ seem to be out of reach, the asymptotic behavior of the bounds for trend functions $\\gamma f$ with $\\gamma\\to \\infty$ and $\\gamma\\to 0$ is thus worthy of consideration. 
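As a quick numeric illustration of the sandwich bound and of the Li-Kuelbs approximation rate, the snippet below evaluates both using only the standard normal distribution function. The values of $P_0$ and $\norm{f}$ are hypothetical inputs, not taken from the paper.

```python
from math import erf, sqrt, pi

def Phi(x):
    """Standard normal distribution function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def Phi_inv(p):
    """Inverse of Phi by bisection (ample accuracy for illustration)."""
    lo, hi = -10.0, 10.0
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if Phi(mid) < p:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# hypothetical inputs: trend-free probability P_0 and the RKHS norm of f
P0, norm_f = 0.30, 0.8
alpha = Phi_inv(P0)
upper = Phi(alpha + norm_f)          # upper bound on P_f
lower = Phi(alpha - norm_f)          # lower bound, taking g = f (any g >= f works)
li_kuelbs_gap = norm_f / sqrt(2.0 * pi)  # bound on |P_f - P_0|
```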
In this paper we shall consider the former case, and we obtain the following:\n\\newline\nIf $f(\\mathbf{t}_0)>0$ for some $\\mathbf{t}_0$ with non-negative components, then\n\\pE{for any $g\\ge f, g\\in \\kHC$ we have\n\\BQN\\label{LD1}\n\\ln P_{\\gamma f} \\ge \\ln\n\\Phi(\\alpha - \\gamma \\norm{g} ) \\ge -(1+o(1))\\frac{\\gamma^2}{2}\\norm{g}^2, \\quad \\gamma \\to \\IF,\n\\EQN\nhence\n\\BQN\\label{LD}\n{\\ln P_{\\gamma f} \\ge -(1+o(1))\\frac{\\gamma^2}{2}\\norm{\\wF}^2, \\quad \\gamma \\to \\IF},\n\\EQN\nwhere $\\wF$ (which exists and is unique) solves the following minimization problem\n\\BQN \\label{OP}\n\\min_{ g,f \\in { \\kHC}, g \\ge f} \\norm{g}= \\norm{\\wF}>0.\n\\EQN\n\\pE{In Section} 2 we shall show that $\\underline{f}$ is the projection of $f$ on a closed convex set of $\\kHC$, and moreover\nwe show that\n\\BQN\\label{LD2}\n{\\ln P_{\\gamma f} \\sim \\ln P_{\\gamma \\underline{f}} \\sim - \\frac{\\gamma^2}{2}\\norm{\\underline{f}}^2, \\quad \\gamma \\to \\IF}.\n\\EQN\nThe rest of this paper is organized as follows: In Section \\ref{sec:pre} we briefly review the RKHS of the additive Wiener field and construct the solution of the minimization problem \\eqref{OP}. We present our main results in Section \\ref{sec:main}. The proofs of the results of this paper are given in Section \\ref{sec:proofs}, and we conclude with an Appendix.\n\n\\section{Preliminaries}\\label{sec:pre}\nThis section reviews basic results on reproducing kernel Hilbert spaces (RKHS), and we shall give a representation of the RKHS of the additive Wiener field $W$. We shall also construct $V$ as a closed convex set of $\\kHC$, which finally enables us to prove that $\\underline{f}$ in \\eqref{OP} is the projection of $f$ on $V$. 
The idea of constructing $V$ comes from a similar result in the one-parameter case (see e.g., \\cite{BiHa1, janssen, Pillow, 1137.60023}).\n\\newline\nIn the rest of \\pE{this paper} bold letters are reserved for vectors, so we shall write for instance\n$\\mathbf{t}=(t_1, t_2, \\ldots, t_d)\\in\\R^d_+$; moreover, $\\lambda_1$ denotes the Lebesgue measure on $\\R_+$,\nwhereas $ds$ means integration with respect to this measure.\n\\subsection{The RKHS of the additive Wiener field}\nRecall that $W_1$ is a one-parameter Wiener process. It is well-known (see e.g., \\cite{berlinet})\nthat the RKHS of the Wiener process $W_1$, denoted by $\\kHA$, is characterized as follows\n$$\\kHA=\\Bigl\\{h:\\R_+\\rightarrow \\R\\big|h(t)=\\int_{[0,t]}h'(s)ds,\\quad h'\\in L_2(\\R_+, \\lambda_1) \\Bigr\\}, $$\nwith the inner product $\\langle h,g\\rangle_1=\\int_{\\R_+}h'(s)g'(s)ds$ and the corresponding norm $\\|h\\|_1^2=\\langle h,h\\rangle_1$. The description of the RKHS for $W_i,\\,i=2,3,\\ldots,d$ is evidently the same.\nWe now construct the RKHS of the additive Wiener field $W$: for any\n\\BQNY\nh_1(\\mathbf{t})&=&f_1(t_1)+f_2(t_2)+\\ldots+f_d(t_d),\\\\\nh_2(\\mathbf{t})&=&g_1(t_1)+g_2(t_2)+\\ldots+g_d(t_d),\n\\EQNY\nwhere $f_i(t_i),\\,g_i(t_i)\\in\\kHA,\\;i=1,2,\\ldots,d,$ define the inner product\n\\begin{equation}\\label{eqhm-10}\n\\langle h_1,h_2\\rangle=\\sum_{i=1}^{d}\\int_{\\R_+}f_i'(s)g_i'(s)ds.\n\\end{equation}\n\\begin{rem}\nBy Lemma \\ref{lemA1} in the Appendix, the representation $h(\\mathbf{t})=h_1(t_1)+h_2(t_2)+\\ldots+h_d(t_d)$ is unique, hence the above inner product is well defined.\n\\end{rem}\nNext, in view of Lemma \\ref{lemA2} in the Appendix we have the following\n\\begin{lem}\nThe RKHS of the additive Wiener field $W$ is given by\n\\begin{equation}\n\\kHC=\\Bigl\\{h:\\R_+^d\\rightarrow \\R\\big|h(\\mathbf{t})=\\sum_{i=1}^{d}h_i(t_i),\\; \\text{where}\\; h_i\\in \\kHA, i=1,2,\\ldots,d \\Bigr\\}\n\\end{equation}\nequipped with the norm $\\norm{h}^2=\\langle 
h,h\\rangle.$\n\\end{lem}\nFor notational simplicity in the following we shall use the same notation $\\langle \\cdot,\\cdot\\rangle$ and $\\norm{\\cdot}$ to present the inner product and norm respectively, on space $\\kHA$ and $\\kHC$.\n\n\\subsection{The solution of minimization problem}\nIn this subsection, we begin to solve equation \\eqref{OP}. For any $h\\in\\kHA$, it has been shown (see \\cite{1137.60023}), that\nthe smallest concave majorant of $h$ solves\n\\BQNY\n\\min_{ g,f \\in { \\kHA}, g \\ge f} \\norm{g}= \\norm{\\wF}>0.\n\\EQNY\nMoreover, as shown in \\cite{janssen}\n the smallest concave majorant of $h$, which we denote by $\\underline{h}$,\n can be written analytically as the unique projection of $h$ on the closed convex set\n$$V_1=\\{h\\in \\kHA \\big|\\,h'(s) \\; \\text{is a non-increasing function} \\}$$\ni.e., $\\underline{h}= Pr_{V_1}h$. Here we write $Pr_{A}h$ for the projection of $h$ on some closed set $A$ also for\nother Hilbert spaces considered below. Further, if we define\n$$\\widetilde{V}_1=\\{h\\in \\kHA \\big|\\,\\langle h,f\\rE \\leq 0 \\;\\text{for any}\\; f\\in V_1\\} $$\nbe the polar cone of $V_1$. 
Then the following hold\n\\begin{lem}\\label{lemma 2.2}\n\\cite{hashorva2014boundary} With the above notation and definitions we have\n\\begin{itemize}\n\\item[(i)] If $h\\in V_1$, then $h\\geq 0$.\n\\item[(ii)] If $h\\in \\widetilde{V}_1$, then $h\\leq 0$\n\\item[(iii)] We have $\\langle Pr_{V_1}h, Pr_{\\widetilde{V}_1}h\\rangle=0$ and further\n\\begin{equation}\nh=Pr_{V_1}h+Pr_{\\widetilde{V}_1}h.\n\\end{equation}\n\\item[(iv)] If $h=h_1+h_2$, $h_1\\in V_1$, $h_2\\in \\widetilde{V}_1$ and $\\langle h_1, h_2\\rangle=0$, then $h_1=Pr_{V_1}h$ and $h_2=Pr_{\\widetilde{V}_1}h$.\n\\item[(v)] The unique solution of the minimization problem $\\min_{g\\geq h, g\\in \\kHA }\\norm{g}$ is $\\underline{h}=Pr_{V_1}h$.\n\\end{itemize}\n\\end{lem}\n\nSince we are going to work with functions $f$ in $\\kHC$ we need to consider the projection of such $f$ on a particular closed convex set.\nIn the following we shall write $f=f_1+f_2+\\ldots+f_d$ meaning that $f(\\vk{t})=f_1(t_1)+ f_2(t_2)+\\ldots+f_d(t_d) $ where $f_1,f_2,\\ldots,f_d \\in \\kHA$. 
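Statement (v) of the lemma above says that the minimizer over $\{g \ge h\}$ is the smallest concave majorant of $h$. For a function sampled on a grid, this majorant is exactly the upper convex hull of the sample points; the following sketch (an illustrative implementation, not part of the paper) computes it with a monotone-chain pass.

```python
import numpy as np

def least_concave_majorant(t, h):
    """Least concave majorant of the piecewise-linear interpolant of (t, h),
    evaluated back on the grid t. Illustrates Pr_{V_1} h for sampled data."""
    hull = []
    for p in zip(t, h):
        # keep only the upper hull: pop points that create a convex kink
        while len(hull) >= 2:
            (x1, y1), (x2, y2) = hull[-2], hull[-1]
            # cross >= 0 means hull[-1] lies on/below segment hull[-2] -> p
            if (x2 - x1) * (p[1] - y1) - (p[0] - x1) * (y2 - y1) >= 0:
                hull.pop()
            else:
                break
        hull.append(p)
    hx = np.array([q[0] for q in hull])
    hy = np.array([q[1] for q in hull])
    return np.interp(t, hx, hy)
```

On the sample `h = [0, 1, 0.5, 2]` at `t = [0, 1, 2, 3]` the dip at `t = 2` is lifted to the chord value, and the slopes of the result are non-increasing, as required for membership in $V_1$.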
Note in passing that this decomposition is unique for any $f\\in \\kHC$.\nDefine the closed convex set\n$$V_{2}=\\{h=h_1+h_2+\\ldots+h_d\\in \\kHC \\big| h_1,h_2,\\ldots,h_d \\in V_1\\}$$\nand let $\\widetilde{V_{2}}$ be the polar cone of $V_{2}$ given by\n$$\\widetilde{V_{2}}=\\{h \\in \\kHC \\big|\\langle h, v\\rEE \\leq 0 \\;\\text{for any}\\; v\\in V_{2}\\},$$\nwith inner product from \\eqref{eqhm-10}.\nAnalogous to Lemma \\ref{lemma 2.2} we have\n\\begin{lem}\\label{lemma 3.2}\nFor any $h=h_1+h_2+\\ldots+h_d\\in \\kHC$, we have\n\\begin{itemize}\n\\item[(i)] If $h\\in V_2$, then $h_i\\geq 0,i=1,2,\\ldots,d$.\n\\item[(ii)] If $h\\in \\widetilde{V}_2$, then $h_i\\leq 0,i=1,2,\\ldots,d$.\n\\item[(iii)] We have $\\langle Pr_{V_2}h, Pr_{\\widetilde{V}_2}h\\rangle=0$ and further\n\\begin{equation}\\label{eqhm-6}\nh=Pr_{V_2}h+Pr_{\\widetilde{V}_2}h.\n\\end{equation}\n\\item[(iv)] If $h=h_1+h_2$, $h_1\\in V_2$, $h_2\\in \\widetilde{V}_2$ and $\\langle h_1, h_2\\rangle=0$, then $h_1=Pr_{V_2}h$ and $h_2=Pr_{\\widetilde{V}_2}h$.\n\\item[(v)] The unique solution of the minimization problem $\\min_{g\\geq h, g\\in \\kHC }\\norm{g}$ is\n\\begin{equation}\\label{eqhm-7}\n\\underline{h}=Pr_{V_2}h=Pr_{V_1}h_1+ Pr_{V_1}h_2+\\ldots+Pr_{V_1}h_d.\n\\end{equation}\n\\end{itemize}\n\\end{lem}\n\n\\section{Main Result}\\label{sec:main}\nConsider two measurable d-parameter functions $f,u:\\R_+^d\\rightarrow \\R$. Suppose that $f(\\mathbf{0})=0$ and $f\\in \\kHC$.\nHence we can write\n$$f(\\mathbf{t})=\\sum_{i=1}^{d}f_i(t_i),\\quad f_i(t_i)\\in \\kHA,\\,i=1,2,\\ldots,d$$\nwe also suppose $f_i(0)=0,i=1,2,\\ldots,d$ in the above decomposition. 
Recall the representations $f_i(t_i)=\\int_{[0,t_i]}f'_i(s)ds,\\quad f'_i\\in L_2(\\R_+, \\lambda_1), i=1,2,\\ldots,d.$ We shall estimate the boundary non-crossing probability\n\\BQNY\n P_f=\\pk{W(\\mathbf{t})+f(\\mathbf{t})\\leq u(\\mathbf{t}),\\;\\mathbf{t}\\in\\R_+^d}.\n \\EQNY\n In the following we set $\\underline{f_i}= Pr_{V_1}f_i,i=1,2,\\ldots,d$ and $\\underline{f}= Pr_{V_2}f$.\nWe state next our main result:\n\n\\begin{thm} \\label{Thn1} Let the following conditions hold:\n\\BQN\\label{conA1}\\lim_{t_i \\rightarrow\\infty}u(0,\\ldots,t_i,0,\\ldots,0)\\underline{f_{i}'}(t_i)=0,\\;i=1,2,\\ldots,d.\n\\EQN\nThen we have\n\\begin{equation*}\\begin{gathered}\nP_f\\leq P_{ f-\\underline{f} }\n\\exp\\biggl (- \\sum_{i=1}^{d}\\int_{\\R_+}u(0,\\ldots,t_i,0,\\ldots,0)d \\underline{f_{i}'}(t_i)\n-\\frac12\\|\\underline{f}\\nOO ^2\\biggr).\n\\end{gathered}\n\\end{equation*}\n\\end{thm}\n\n\\begin{rem} Note that $f$ starts from zero, therefore $f$ cannot be a constant unless $f\\equiv 0$; but this case is trivial.\n\\end{rem}\n\\begin{rem} Condition \\eqref{conA1} of the theorem means that asymptotically the components of the shift and their derivatives are negligible in comparison with the function $u$.\n\\end{rem}\nUsing Theorem \\ref{Thn1}, we can obtain an asymptotic property of $P_{\\gamma f}$: in fact, if $u(\\mathbf{t})$ is bounded above, then we have the following result\n\\BK\\label{korr}\nIf $f\\in\\kHC$ is such that $f(\\mathbf{t}_0)>0$ for some $\\mathbf{t}_0$, then\n\\BQNY\n{\\ln P_{\\gamma f} \\sim \\ln P_{\\gamma \\underline{f}} \\sim - \\frac{\\gamma^2}{2}\\norm{\\underline{f}}^2, \\quad \\gamma \\to \\IF}.\n\\EQNY\n\\EK\n\n\\section{Proofs}\\label{sec:proofs}\n\\prooflem{lemma 2.2} For $h\\in V_1$, the derivative $h'$\n is non-increasing and therefore \\pE{$h'$ } is non-negative. Since $h(0)=0,$ it follows that $h(u)\\geq 0$ for all $u$. 
The proof of statements (ii) to (v) can be found in \\cite{hashorva2014boundary}; we do not repeat it here.\n\\QED\n\n\\prooflem{lemma 3.2}\n(i) If $h\\in V_2$, from the definition of $V_2$ we obtain $h_1,h_2,\\ldots,h_d \\in V_1$. Thus $h_i\\geq 0,i=1,2,\\ldots,d,$ follows directly from (i) in Lemma \\ref{lemma 2.2}.\n\\newline\n(ii) If $h(\\mathbf{t})=h_1(t_1)+h_2(t_2)+\\ldots+h_d(t_d)\\in\\widetilde{V}_2,$ then $h_i(t_i)\\in\\kHA.$ For any $f_i(t_i)\\in V_1,$ let\n\\BQNY\nv(\\mathbf{t})=f_i(t_i)\\in V_2.\n\\EQNY From the definition of $\\widetilde{V}_2$, we obtain\n$$\\langle h,v\\rangle=\\langle h_i,f_i\\rangle\\leq 0.$$\nTherefore, $h_i\\in\\widetilde{V}_1,$ and the result follows from (ii) in Lemma \\ref{lemma 2.2}.\n\\newline\nThe proofs of statements $(iii)$ and $(iv)$ are similar to those of $(iii)$ and $(iv)$ in Lemma \\ref{lemma 2.2}, and follow immediately from \\cite{janssen}.\n\\newline\n(v) For any $h(\\mathbf{t})\\in\\kHC,$ let $g(\\mathbf{t})\\in\\kHC$ be such that $g\\geq h$; we then have $g_i\\geq h_i,i=1,2,\\ldots,d,$ where\n\\BQNY\nh&=&h_1+h_2+\\ldots+h_d,\\\\\ng&=&g_1+g_2+\\ldots+g_d.\n\\EQNY\nFor the minimization problem we obtain\n\\BQNY\n\\min_{g\\geq h, g\\in \\kHC }\\norm{g}^2&=&\\min_{g\\geq h, g\\in \\kHC}(\\norm{g_1}^2+\\norm{g_2}^2+\\ldots+\\norm{g_d}^2)\\\\\n&=&\\sum_{i=1}^{d}\\min_{g_i\\geq h_i,g_i\\in\\kHA}\\norm{g_i}^2\\\\\n&=&\\norm{\\underline{h_1}}^2+\\norm{\\underline{h_2}}^2+\\ldots+\\norm{\\underline{h_d}}^2.\n\\EQNY\nThe equalities above are attained if and only if\n\\begin{equation}\n\\underline{h}=Pr_{V_2}h=Pr_{V_1}h_1+ Pr_{V_1}h_2+\\ldots+Pr_{V_1}h_d.\n\\end{equation}\nThis completes the proof.\n\\QED\n\n\\prooftheo{Thn1} Denote by $\\widetilde{P}$ a probability measure that is defined via its Radon-Nikodym derivative\n\\begin{equation*}\\begin{gathered} \\frac{dP}{d\\widetilde{P}}=\\prod_{i=1}^{d}\\exp\\Big(-\\frac12\\|f_i\\nO ^2+\\int_{\\R_+}f_i'(t_i)dW_i^0(t_i)\\Big).\n\\end{gathered}\n\\end{equation*}\nAccording to the Cameron-Martin-Girsanov theorem, $W_i^0(t)=W_i(t) 
+\\int_{[0,t]}f_i'(s)ds,\\;i=1,2,\\ldots,d$ are \\pE{independent} Wiener processes. Denote\n$1_u\\{X\\}=1\\{X(\\mathbf{t})\\leq u(\\mathbf{t}),\\;\\mathbf{t}\\in\\R_+^d\\}$ and $$W^0(\\mathbf{t})=W_1^0(t_1)+ W_2^0(t_2)+\\ldots+W_d^0(t_d).$$\nNote that $ \\norm{f}^2= \\norm{f_1}^2+\\norm{f_2}^2+\\ldots+\\norm{f_d}^2$, \\pE{hence\nusing further} \\eqref{eqhm-6} and \\eqref{eqhm-7} we obtain\n\\BQNY\n\\lefteqn{P_f}\\\\\n &=&\\EE{ 1_u\\Big(\\sum_{i=1}^{d}(W_i(t_i)+f_i(t_i))\\Big)}\\\\\n&=&\\mathbb{E}_{\\widetilde{P}}\\Biggl( \\frac{dP}{d\\widetilde{P}}1_u\\Big(W^0(\\mathbf{t})\\Big)\\Biggr)\\\\\n&=&\n\\exp\\Big(-\\frac12 \\norm{f}^2 \\Big) \\EE{ \\exp\\Big(\\sum_{i=1}^{d}\\int_{\\R_+}f_i'(t_i)dW_i^0(t_i)\n\\Big)1_u\\Big(W^0(\\mathbf{t})\\Big)}\\\\\n&=&\\exp\\Big(-\\frac12 \\norm{\\underline{f}}^2 \\Big)\\\\\n&&\\times \\mathbb{E} \\Biggl\\{\\prod_{i=1}^{d}\\exp\\Big(-\\frac12\\|Pr_{\\widetilde{V}_1}{f_i} \\nO ^2\n+\\int_{\\R_+} Pr_{\\widetilde{V}_1}f_i'(t_i)dW_i^0(t_i)\\Big)\n\\times \\exp\\Big(\\sum_{i=1}^{d}\\int_{\\R_+ } \\underline{f_i}'(t_i)dW_i^0( {t_i})\\Big)1_u\\Big(W^0(\\mathbf{t})\\Big)\\Biggr\\}.\n\\EQNY\nIn order to re-write $\\int_{\\R_+ } \\underline{f_1}'(t_1)dW_1^0( {t_1})$, we mention that in this integral $dW_1^0(t_1)=d_1(W^0(t_1,0,\\ldots,0))$, therefore on the indicator $1_u\\{\\sum_{i=1}^{d} W_i^0(t_i)\\}=1_u\\{W^0(\\mathbf{t})\\}$\nunder conditions of the theorem and using lemma \\ref{lemA3} in the Appendix we have the relations\n\\begin{equation}\\begin{gathered}\\label{eq3.2n}\n\\int_{\\R_+}\\underline{f_1}' (t_1)dW_1^0( {t_1})=\\lim_{n\\rightarrow \\infty}\\int_{[0,n]}\\underline{f_1}' (t_1)dW_1^0( {t_1})\\\\\n=\\lim_{n\\rightarrow \\infty}\\Big(\\underline{f_1}' (n)W^0(n,0,\\ldots,0)+\\int_{[0,n]}W^0(t_1,0,\\ldots,0)d(-\\underline{f_1}' )(t_1)\\Big).\n\\end{gathered}\n\\end{equation}\nSimilarly, for any $i=2,3,\\ldots,d$ we have\n\\begin{equation}\\label{eq3.3n}\\int_{\\R_+ }\\underline{f_i}'(t_i)dW_i^0(t_i)\n=\\lim_{n\\rightarrow 
\\infty}\\Big(\\underline{f_i}' (n)W^0(0,\\ldots,n,0,\\ldots,0)+\\int_{[0,n]}W^0(0,\\ldots,t_i,0,\\ldots,0)d(-\\underline{f_i}' )(t_i)\\Big).\n\\end{equation}\n\nCombining \\eqref{eq3.2n}--\\eqref{eq3.3n} and using conditions \\eqref{conA1}, we get that on the same indicator\n\n\\begin{equation}\n\\begin{gathered}\\label{3.6n}\\sum_{i=1}^{d}\\int_{\\R_+ } \\underline{f_i}'(t_i)dW_i^0( {t_i})\\leq \\lim_{n\\rightarrow \\infty}\\Big(\\sum_{i=1}^{d}\\underline{f_i}' (n)W^0(0,\\ldots,n,0,\\ldots,0)+\\sum_{i=1}^{d}\\int_{[0,n]}W^0(0,\\ldots,t_i,0,\\ldots,0)d(-\\underline{f_i}' )(t_i)\\Big)\\\\\n\\leq - \\sum_{i=1}^{d}\\int_{\\R_+}u(0,\\ldots,t_i,0,\\ldots,0)d \\underline{f_{i}'}(t_i).\n\\end{gathered}\n\\end{equation}\nOn the other hand, we have\n\\BQN\\label{3.7n}\nP_{f-\\underline{f}}&=&\\mathbb{E} \\Biggl\\{\\prod_{i=1}^{d}\\exp\\Big(-\\frac12\\|f-\\underline{f} \\nO ^2\n+\\int_{\\R_+} (f-\\underline{f})'dW_i^0(t)\\Big)\n1_u\\Big(W^0(\\mathbf{t})\\Big)\\Biggr\\}\\\\\n\\nonumber&=&\\mathbb{E} \\Biggl\\{\\prod_{i=1}^{d}\\exp\\Big(-\\frac12\\|Pr_{\\widetilde{V}_1}{f_i} \\nO ^2\n+\\int_{\\R_+} Pr_{\\widetilde{V}_1}f_i'(t)dW_i^0(t)\\Big)\n1_u\\Big(W^0(\\mathbf{t})\\Big)\\Biggr\\}.\n\\EQN\nFrom \\eqref{3.6n} and \\eqref{3.7n}, we conclude that\n\\begin{equation*}\\begin{gathered}\nP_f\\leq P_{ f-\\underline{f} }\n\\exp\\biggl (- \\sum_{i=1}^{d}\\int_{\\R_+}u(0,\\ldots,t_i,0,\\ldots,0)d \\underline{f_{i}'}(t_i)\n-\\frac12\\|\\underline{f}\\nOO ^2\\biggr).\n\\end{gathered}\n\\end{equation*}\n\\QED\n\n\\proofkorr{korr} From \\eqref{LD1} we obtain\n\\BQNY\n\\ln P_{\\gamma f} \\geq -(1+o(1))\\inf_{g\\geq f}\\frac{\\gamma^2}{2}\\norm{g}^2=-(1+o(1))\\frac{\\gamma^2}{2}\\norm{\\underline{f}}^2, \\quad \\gamma \\to \\IF.\n\\EQNY\nOn the other hand, from theorem \\ref{Thn1} we obtain\n\\BQNY\nP_{\\gamma f} \\leq P_{\\gamma(f-\\underline{f})}\\exp(-(1+o(1))\\frac{\\gamma^2}{2}\\norm{\\underline{f}}^2).\n\\EQNY\nSince $f(\\mathbf{t}_0)>0$, then 
$\\lim_{\\gamma\\to\\infty}P_{\\gamma(f-\\underline{f})}=\\mathrm{const}>0$. Hence as $\\gamma\\to\\infty,$\n\\BQNY\n\\ln P_{\\gamma f} \\leq -(1+o(1))\\frac{\\gamma^2}{2}\\norm{\\underline{f}}^2,\n\\EQNY\nand the claim follows.\n\\QED\n\n\\section {Appendix}\\label{sec:appendix}\n\\begin{lem}\\label{lemA1} If the function $h:\\R_+^d\\rightarrow \\R$ admits the representation\n\\begin{equation}\\label{unique}\n h(\\mathbf{t})=h_1(t_1)+h_2(t_2)+\\ldots+h_d(t_d),\n\\end{equation}\n where $h_i\\in \\kHA, i=1,\\ldots,d, $ then \\pE{the} representation \\eqref{unique} is unique.\n\\end{lem}\n\\begin{proof}\nSuppose the function $h:\\R_+^d\\rightarrow \\R$ admits the two representations\n\\begin{equation}\\label{eqhm-2}\n h(\\mathbf{t})=\\sum_{i=1}^{d}f_i(t_i)=\\sum_{i=1}^{d}g_i(t_i),\n\\end{equation}\n where $f_i,g_i\\in \\kHA, i=1,2,\\ldots,d.$\nFor any $i=1,2,\\ldots,d,$ we put $t_j=0$ for $j\\neq i$; noting that $f_j(0)=g_j(0)=0,$ we obtain $f_i=g_i,i=1,2,\\ldots,d.$ Hence the representation \\eqref{unique} is unique.\n\\end{proof}\nNote that the covariance function of $W_i$ is $s_i\\wedge t_i$, and the covariance function of the process $W(\\mathbf{t})=W_1(t_1)+W_2(t_2)+\\ldots+W_d(t_d)$ is given by\n\\begin{equation*}\nR(\\mathbf{s},\\mathbf{t}):=\\EE{W(\\mathbf{s})W(\\mathbf{t})}=\\sum_{i=1}^{d}s_i\\wedge t_i, \\quad\n\\mathbf{s}=(s_1,s_2,\\ldots,s_d), \\;\\mathbf{t}=(t_1,t_2,\\ldots,t_d).\n\\end{equation*}\nNext, we will identify the RKHS corresponding to a sum of $d$ covariances.\nSuppose now that $R_i,i=1,2,\\ldots,d$ are $d$ covariances of Gaussian processes, with corresponding RKHSs\n$\\mathbb{K}_i,i=1,2,\\ldots,d,$ and denote by $\\norm{\\cdot}_i$ the norm of the RKHS $\\mathbb{K}_i,i=1,2,\\ldots,d.$\nThe following is a well-known lemma and we refer the reader to \\cite{aronszajn1950theory}\nfor its proof.\n\\begin{lem}\\label{lemA2}\nThe RKHS of a Gaussian process with covariance $R=R_1+R_2+\\ldots+R_d$ is then given by the Hilbert space $\\mathbb{K}$ 
which consists of all functions $f(\\mathbf{t})=f_1(t_1)+f_2(t_2)+\\ldots+f_d(t_d),$ with $f_i(t_i)\\in\\mathbb{K}_i,i=1,2,\\ldots,d,$ and whose norm is given by\n\\BQNY\n\\norm{f}^2=\\inf(\\norm{f_1}_1^2+\\norm{f_2}_2^2+\\ldots+\\norm{f_d}_d^2),\n\\EQNY\nwhere the infimum is taken over all decompositions $f(\\mathbf{t})=f_1(t_1)+f_2(t_2)+\\ldots+f_d(t_d)$ with $f_i(t_i)\\in\\mathbb{K}_i,i=1,2,\\ldots,d.$ Furthermore, if for any $f\\in \\mathbb{K}$ the decomposition $f(\\mathbf{t})=f_1(t_1)+f_2(t_2)+\\ldots+f_d(t_d)$ is unique, then for $f,g\\in\\mathbb{K}$ with $g(\\mathbf{t})=g_1(t_1)+g_2(t_2)+\\ldots+g_d(t_d)$ the inner product of $\\mathbb{K}$ is\n\\BQNY\n\\langle f,g \\rangle=\\langle f_1,g_1 \\rangle+\\langle f_2,g_2 \\rangle+\\ldots+\\langle f_d,g_d \\rangle.\n\\EQNY\n\\end{lem}\nAlso, if we define the sum $\\oplus$ of the spaces $\\mathbb{K}_i,i=1,2,\\ldots,d$ by $\\mathbb{K}_i\\oplus\\mathbb{K}_j:=\\{f=f_i+f_j\\mid f_i\\in\\mathbb{K}_i,f_j\\in\\mathbb{K}_j\\}$, then we can rewrite $\\mathbb{K}$ as\n\\BQNY\n\\mathbb{K}=\\mathbb{K}_1\\oplus\\mathbb{K}_2\\oplus\\ldots\\oplus\\mathbb{K}_d.\n\\EQNY\nLet $W_1$ be a Wiener process and $h:\\R_+\\rightarrow \\R$ an integrable function; we can extend the integral of $h$ with respect to $W_1$ to $\\R_+$ in the following sense:\n\\BQN\\label{eq-integral}\n\\int_{\\R_+}h(s)dW_1(s)=L_2-\\lim_{n\\rightarrow \\infty}\\int_{[0,n]}h (s)dW_1(s)\n\\EQN\nwhenever this limit exists. Furthermore, for any $h\\in V_1,$ the derivative $h'\\in L_2(\\R_+, \\lambda_1)$ is non-increasing, therefore $\\int_{[0,n]}h'^2 (s)ds\\leq h'^2 (0)n$, which implies that the integral $\\int_{[0,n]}h (s)dW_1(s)$ is correctly defined as an It\\^o integral. We then have the following integration-by-parts formula.\n\\begin{lem}\\label{lemA3}\nLet $h\\in V_1$ and let $W_1$ be a Wiener process. 
Then for any $T<\\infty$, we have the following:\n\\BQN\\label{eq-bypart}\n\\int_{[0,T]}h (s)dW_1(s)=\\int_{[0,T]}W_1(s)d(-h (s))+h (T)W_1(T),\n\\EQN\nwhere the integral in the right-hand side of \\eqref{eq-bypart} is a Riemann-Stieltjes integral.\n\\end{lem}\n\\begin{proof}\nFrom \\cite{karatzas2012brownian}, for any partition $\\pi$ of interval $[0,T],$ we obtain that the integral $\\int_{[0,T]}h (s)dW_1(s)$ coincide with the limits in probability of integral sums\n\\BQNY\n\\int_{[0,T]}h (s)dW_1(s)&=&L_2-\\lim_{\\abs{\\pi}\\rightarrow 0}\\sum_{i=1}^{N}h (s_{i-1})(W_1(s_i)-W_1(s_{i-1}))\\\\\n&=&L_2-\\lim_{\\abs{\\pi}\\rightarrow 0}\\sum_{i=1}^{N}W_1(s_i)(h (s_{i-1})-h (s_{i}))+W_1(T)h(T)\\\\\n&=&\\int_{[0,T]}W_1(s)d(-h (s))+W_1(T)h(T).\n\\EQNY\n\\end{proof}\n\n\n{\\bf Acknowledgment}: This work was partly financed by the project NSFC No.71573143 and SNSF Grant 200021-166274.\n\n\\bibliographystyle{ieeetr}\n\n ","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\nEntanglement is one of the most interesting features of quantum mechanics that cannot be explained by any local classical theory. This notion plays a key role in quantum information and quantum computation sciences (see e.g. \\cite{Nielsen2010}). As soon as entanglement was recognized as a resource for quantum information processing it was considered very fragile and easily degradable by environmental noise. So the idea to avoid as much as possible interactions with environment was dominant. However, recently it has been put forward the idea that environment can after all have a positive role \\cite{VWC09}. 
For instance, when the environment acts globally on a composite system it can supply a kind of interaction that helps in establishing entanglement.\nAmong environmental effects, dissipation plays a prominent role because it allows for the stabilization of targeted resources, like entanglement, a fact that may turn out to be a key advantage over unitary (noiseless) manipulation \\cite{Kraus2008}.\nThen, it has been shown, both theoretically \\cite{Plenio2002} and experimentally \\cite{Krauter2011}, that a global dissipative environment can establish stationary entanglement.\nSurprisingly, this happens even without any direct interaction among subsystems \\cite{Ghosh2006}. The simplest model where such an effect occurs is that of two qubits dissipating into a common environment, a possibility that has been proved true also for systems composed of more than two subsystems \\cite{Memarzadeh2013}.\n\nAfter having ascertained the benefits of the global environment's action on entanglement, \none is naturally led to ask what would happen if, in addition, there were also local environment actions.\nWould entanglement be generated as well and persist indefinitely? \nIf so, to what extent and for which initial states?\nHere, we shall address these issues by considering\na model of two qubits dissipating into a ``glocal\" environment (at non-zero temperatures). \nBy ``glocal\" we mean a mixture of a global environment (with which the two qubits jointly interact) and local environments (with which each qubit separately interacts).\n\nWe shall then determine conditions under which stationary entanglement can be induced. 
It turns out, on the one hand (and rather counterintuitively), that entanglement in the presence of local environments is achievable when these are at nonzero temperature.\nOn the other hand, while the global environment is vital for the indirect interaction of the qubits, it should ideally be at zero temperature.\n\nThe results are obtained by first studying the dynamical map (focusing on the stationary regime) in terms of its Kraus operators \\cite{Kraus83} and then by characterizing it through an entangling power measure. The latter relies on the statistical average over the initial states, establishing an input-independent dynamics of entanglement \\cite{Zanardi2000} (a concept that has already been applied to many quantum systems \\cite{Lakshminarayan2001}).\n\nThe paper is organized as follows. In Sec. \\ref{sec:model} we introduce our model with two qubits and thermal environments. In Sec. \\ref{sec:Kraus} we find the Kraus operators corresponding to the dynamical map, focusing on the stationary regime. Sec. \\ref{sec:entpower} deals with the entangling power of the map. Finally, we draw our conclusions in Sec. \\ref{sec:conclusion}.\n\n\n\n\n\\section{The Model}\\label{sec:model}\n\nDissipation of energy into the environment is an important phenomenon in a variety of open quantum systems. Quite generally, the environment should be treated as an uncorrelated thermal equilibrium mixture of states. 
For two qubits dissipating into their own thermal environments, the description of the dynamics stems from a master equation of Lindblad form \\cite{Breuer2002}, with Lindblad operators proportional to $\\sigma_1$, $\\sigma_2$, $\\sigma_1^{\\dagger}$ and $\\sigma_2^{\\dagger}$, where $\\sigma_i:=| g_i\\rangle\\langle e_i|$, with $|g_i\\rangle$ and $|e_i\\rangle$ the ground and excited states of the $i$th qubit $(i=1,2)$.\nThis dynamics constitutes the local dissipation.\nHere, motivated by the fact that the continuing miniaturization of physical devices places qubits ever closer together, we also consider global dissipation, namely the two qubits dissipating into a common thermal environment (see Fig. \\ref{fig:sys}). \nThis amounts to considering additional Lindblad operators proportional \nto $\\sigma_1 + \\sigma_2$ and $\\sigma_1^{\\dagger}+\\sigma_2^{\\dagger}$. \n\\begin{figure}[h]\n\\centering\n\\includegraphics[width=0.45\\textwidth]{fig1.pdf}\n\\fcaption{\\label{fig:sys} Pictorial representation of the system under study.}\n\\end{figure}\nThus the dynamics of the density operator $\\rho$ of the system under study is governed by the following master equation\n\\begin{equation}\n\\dot{\\rho}(t)=\\gamma \\sum_{k=1}^2 \\left[2L_k\\rho L_k^\\dag\n-L_k^\\dag L_k \\rho -\\rho L_k^\\dag L_k\\right]+(1-\\gamma) \\sum_{k=3}^6 \\left[2L_k\\rho L_k^\\dag\n-L_k^\\dag L_k \\rho -\\rho L_k^\\dag L_k\\right],\n\\label{eq:newsys}\n\\end{equation}\nwhere\n\\begin{equation}\n\\begin{aligned}\nL_1&=\\sqrt{\\bar{n}_g+1} \\;(\\sigma_1+\\sigma_2),\\\\\nL_2&=\\sqrt{\\bar{n}_g} \\;(\\sigma_1^\\dag+\\sigma_2^\\dag),\\\\\nL_3&=\\sqrt{\\bar{n}_l+1}\\; \\sigma_1,\\\\\nL_4&=\\sqrt{\\bar{n}_l+1}\\;\\sigma_2,\\\\\nL_5&=\\sqrt{\\bar{n}_l}\\;\\sigma_1^\\dag,\\\\\nL_6&=\\sqrt{\\bar{n}_l}\\;\\sigma_2^\\dag,\n\\end{aligned}\n\\label{eq:Lin}\n\\end{equation}\nwith $\\bar{n}_g$ and $\\bar{n}_l$ the mean numbers of thermal excitations in the global and local environments.
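For concreteness, the master equation above can be cast as a linear system and its stationary state found numerically. The following is a minimal sketch (the function names and the row-stacked vectorization convention are our own illustrative choices, not the paper's notation); per the discussion below, the kernel of the generator is one dimensional for $\gamma<1$ or $\bar{n}_g>0$, so the steady state is unique there:

```python
import numpy as np
from scipy.linalg import null_space

# Single-qubit lowering operator sigma = |g><e| in the basis {|e>, |g>}
s = np.array([[0, 0], [1, 0]], dtype=complex)
I2 = np.eye(2, dtype=complex)
s1, s2 = np.kron(s, I2), np.kron(I2, s)

def liouvillian(gamma, n_g, n_l):
    """Generator of Eq. (eq:newsys) as a 16x16 matrix acting on
    the row-stacked vec(rho): vec(A rho B) = (A kron B^T) vec(rho)."""
    L_ops = [
        (gamma,     np.sqrt(n_g + 1) * (s1 + s2)),
        (gamma,     np.sqrt(n_g)     * (s1 + s2).conj().T),
        (1 - gamma, np.sqrt(n_l + 1) * s1),
        (1 - gamma, np.sqrt(n_l + 1) * s2),
        (1 - gamma, np.sqrt(n_l)     * s1.conj().T),
        (1 - gamma, np.sqrt(n_l)     * s2.conj().T),
    ]
    I4 = np.eye(4, dtype=complex)
    M = np.zeros((16, 16), dtype=complex)
    for w, L in L_ops:
        LdL = L.conj().T @ L
        M += w * (2 * np.kron(L, L.conj())
                  - np.kron(LdL, I4) - np.kron(I4, LdL.T))
    return M

def steady_state(gamma, n_g, n_l):
    """Unique stationary rho for gamma<1 or n_g>0 (ker M is 1D there)."""
    v = null_space(liouvillian(gamma, n_g, n_l))[:, 0]
    rho = v.reshape(4, 4)
    return rho / np.trace(rho)
```

The returned matrix can be compared entry by entry with the analytical stationary state given in the next section.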
\nIn Eq.\\eqref{eq:newsys} the parameter $\\gamma\\in[0,1]$ describes the interplay between purely local dissipation ($\\gamma=0$) and purely global dissipation ($\\gamma=1$). We have assumed a unit decay rate. \n\nIn the case $\\bar{n}_l=\\bar{n}_g=0$, since in\nEq.\\eqref{eq:newsys} there is no Hamiltonian term and the Lindblad operators all commute, we can simply model the process as a ``weakly measure and prepare\" channel. The local dissipation just asks each qubit whether it is excited and gives it some chance of decaying into the ground state. It can be represented by a Markov link \n$|e_k\\rangle\\to|g_k\\rangle$ $(k=1,2)$. The global dissipation just asks the two qubits \nabout the presence of one or two excitations in the symmetric subspace, which then decay at a fixed rate.\n It leaves $\\frac{1}{\\sqrt{2}}\\left(\\ket{e_1}\\ket{g_2}-\\ket{g_1}\\ket{e_2}\\right )$ fixed and can be represented by the Markov link \n$\\ket{e_1}\\ket{e_2}\\to\\frac{1}{\\sqrt{2}}\\left(\\ket{e_1}\\ket{g_2}+\\ket{g_1}\\ket{e_2}\\right )\\to\\ket{g_1}\\ket{g_2}$. For nonzero $\\bar{n}_l$ and\/or $\\bar{n}_g$, the dynamics is much more involved.
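The Markov links just described can be checked directly on the jump operators. A small numpy sketch (the basis ordering $|e\rangle=(1,0)^T$, $|g\rangle=(0,1)^T$ is an assumption of this illustration): the singlet is annihilated by the global jump operator $\sigma_1+\sigma_2$, while $|e_1\rangle|e_2\rangle$ cascades through the symmetric state to $|g_1\rangle|g_2\rangle$:

```python
import numpy as np

# Basis {|e>, |g>}; sigma = |g><e| is the lowering operator
s = np.array([[0, 0], [1, 0]], dtype=complex)
I2 = np.eye(2, dtype=complex)
s1, s2 = np.kron(s, I2), np.kron(I2, s)

# Global jump operator at n_g = 0 (up to the overall sqrt factor)
L_g = s1 + s2

ee = np.array([1, 0, 0, 0], dtype=complex)                    # |e1>|e2>
singlet = np.array([0, 1, -1, 0], dtype=complex) / np.sqrt(2)
triplet = np.array([0, 1,  1, 0], dtype=complex) / np.sqrt(2)
gg = np.array([0, 0, 0, 1], dtype=complex)                    # |g1>|g2>

dark = np.allclose(L_g @ singlet, 0)                  # singlet is left fixed
step1 = np.allclose(L_g @ ee, np.sqrt(2) * triplet)   # |ee> -> symmetric state
step2 = np.allclose(L_g @ triplet, np.sqrt(2) * gg)   # symmetric state -> |gg>
```

The three booleans confirm the dark state and the two-step cascade of the Markov link.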
\n\nTo study it we formally expand the density operator in the basis $\\{\\ket{1}:=\\ket{e_1}\\ket{e_2},\\ket{2}:=\\ket{e_1}\\ket{g_2},\\ket{3}:=\\ket{g_1}\\ket{e_2},\\ket{4}:=\\ket{g_1}\\ket{g_2}\\}$ so as to have\n\\begin{equation}\n\\rho(t)= \\sum_{j,k=1}^{4}\\rho_{jk}(t)\\ket{j}\\bra{k},\n\\label{eq:timeden}\n\\end{equation}\nwhere $\\rho_{jk}(t)$ are unknown time-dependent coefficients.\nFor the sake of simplicity we will define\n$\\rho_{jk}\\equiv \\rho_{jk}(0)$.\n\nUpon insertion of \\eqref{eq:timeden} into \\eqref{eq:newsys} \nthe dynamics will be described by a set of linear differential equations for the unknown coefficients\n$\\rho_{jk}(t)$ that can be compactly expressed as\n\\begin{equation}\n\\dot{\\text{\\bf{v}}}(t)=M\\text{\\bf{v}}(t),\n\\label{eq:differ}\n\\end{equation}\nwhere $\\text{\\bf{v}}(t)=\\left( \\rho_{11}(t),\\rho_{12}(t),\\cdots,\\rho_{43}(t),\\rho_{44}(t)\\right) ^{\\text{T}}$ and $M$ is a $16\\times 16$ matrix of constant coefficients given by\n\\begin{equation}\nM= \\begin{pmatrix}\n M_{11} & M_{12} \\\\\n M_{21} & M_{22}\n \\end{pmatrix},\n\\end{equation}\nwith\n\\begin{equation}\nM_{11}:= \\left(\n \\begin{matrix}\n -4 & 0 & 0 & 0 & 0 & 2\\xi & \\eta & 0\\\\ \n 0 & -3 & \\zeta & 0 & 0 & 0 & 0 & \\eta \\\\ \n 0 & \\zeta & -3 & 0 & 0 & 0 & 0 & 2\\xi \\\\ \n 0 & 0 & 0 & -2 & 0 & 0 & 0 & 0 \\\\\n 0 & 0 & 0 & 0 & -3 & 0 & 0 & 0 \\\\\n 2(1+\\xi) & 0 & 0 & 0 & 0 & -2 & \\zeta & 0 \\\\\n \\chi& 0 & 0 & 0 & 0 & \\zeta & -2 & 0 \\\\\n 0 & \\chi& 2(1+\\xi) & 0 & 0 & 0 & 0 & -1 \n \\end{matrix}\\right)-4\\xi\\, I_{8\\times 8},\n\\end{equation}\n\n\\begin{equation}\nM_{12}:= \\left(\n \\begin{matrix}\n 0 & \\eta & 2\\xi & 0 & 0 & 0 & 0 & 0 \\\\\n 0 & 0 & 0 & 2\\xi & 0 & 0 & 0 & 0 \\\\\n 0 & 0 & 0 & \\eta & 0 & 0 & 0 & 0 \\\\\n 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\\\\n \\zeta & 0 & 0 & 0 & 0 & \\eta & 2\\xi & 0 \\\\\n 0 & \\zeta & 0 & 0 & 0 & 0 & 0 & 2\\xi \\\\\n 0 & 0 & \\zeta & 0 & 0 & 0 & 0 & \\eta \\\\\n 0 & 0 & 0 & \\zeta & 0 & 0 & 0 & 0 \n 
\\end{matrix}\\right),\n\\end{equation}\n\n\\begin{equation}\nM_{21}:= \\left(\n \\begin{matrix}\n 0 & 0 & 0 & 0 & \\zeta & 0 & 0 & 0 \\\\\n \\chi& 0 & 0 & 0 & 0 & \\zeta & 0 & 0 \\\\\n 2(1+\\xi) & 0 & 0 & 0 & 0 & 0 & \\zeta & 0 \\\\\n 0 & 2(1+\\xi) & \\chi& 0 & 0 & 0 & 0 & \\zeta \\\\\n 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\\\\n 0 & 0 & 0 & 0 & \\chi& 0 & 0 & 0 \\\\\n 0 & 0 & 0 & 0 & 2(1+\\xi) & 0 & 0 & 0 \\\\\n 0 & 0 & 0 & 0 & 0 & 2(1+\\xi) & \\chi& 0 \n \\end{matrix}\\right),\n\\end{equation}\n\n\n\\begin{equation}\nM_{22}:= \\left(\n \\begin{matrix}\n -3 & 0 & 0 & 0 & 0 & 2\\xi & \\eta & 0 \\\\\n 0 & -2 & \\zeta & 0 & 0 & 0 & 0 & \\eta \\\\\n 0 & \\zeta & -2 & 0 & 0 & 0 & 0 & 2\\xi \\\\\n 0 & 0 & 0 & -1 & 0 & 0 & 0 & 0 \\\\\n 0 & 0 & 0 & 0 & -2 & 0 & 0 & 0 \\\\\n 2(1+\\xi) & 0 & 0 & 0 & 0 & -1 & \\zeta & 0 \\\\\n \\chi& 0 & 0 & 0 & 0 & \\zeta & -1 & 0 \\\\\n 0 & \\chi& 2(1+\\xi) & 0 & 0 & 0 & 0 & 0\n \\end{matrix}\\right)-4\\xi\\, I_{8\\times 8},\n\\end{equation}\nand \n\\begin{equation}\n\\begin{aligned}\n\\xi&:=\\gamma\\bar{n}_g+(1-\\gamma)\\bar{n}_l,\\\\ \n\\eta&:=2\\gamma\\bar{n}_g,\\\\\n\\chi&:=2\\gamma(1+\\bar{n}_g),\\\\\n\\zeta&:=-\\gamma(1+2\\bar{n}_g).\n\\end{aligned}\n\\end{equation}\n\n\n\n\\section{Steady states and dynamical map} \\label{sec:Kraus}\n\nWe are interested in the stationary solutions of Eq.\\eqref{eq:differ}, i.e. 
in $\\text{\\bf{v}}(t=\\infty)$.\nWe may notice that for $\\gamma<1$ or $\\bar{n}_g>0$ the steady state can be simply found by solving\n$M\\text{\\bf{v}}(t=\\infty)=0$, since $\\ker M$ is one dimensional.\nIn contrast, for $\\gamma=1$ and $\\bar{n}_g=0$, $\\ker M$ has dimension greater than one, meaning that the steady state is not unique and will depend on the initial state.\nHence it must be derived by first solving Eq.\\eqref{eq:differ} and then taking $\\lim_{t\\to\\infty}\\text{\\bf{v}}(t)$\n(see Appendix \\ref{App:differ}).\nThis different behavior should be ascribed to the fact that in Eq.\\eqref{eq:newsys} when $\\gamma=1$ and $\\bar{n}_g=0$ there exist nontrivial operators (i.e., not multiples of the identity) commuting with the Lindblad operators \\cite{Spohn}.\n\nTaking into account both cases, the stationary density operator can be expressed (in the basis $\\{|1\\rangle,\n|2\\rangle, |3\\rangle, |4\\rangle\\}$) as:\n\\begin{equation}\n\\rho(\\infty)=\\begin{pmatrix}\n B_1 & 0 & 0 & 0 \\\\\n 0 & B_2+R_1 & D-R_1 & R_2 \\\\\n 0 & D-R_1 & B_3+R_1 & -R_2 \\\\\n 0 & R_2^{*} & -R_2^{*} & B_4+R_3\n \\end{pmatrix},\n \\label{eq:stationarystate}\n\\end{equation}\nwhere\n\\begin{equation}\n\\begin{aligned}\nB_1&:=\\frac{1}{H}\\left(1-\\delta_{\\gamma,1}\\delta_{\\bar{n}_g,0}\\right)\n\\Big[ 2 \\gamma ^2 \\bar{n}_g^2-(\\gamma -1) \\bar{n}_l^2 (6 \\gamma \n \\bar{n}_g+1)+2 \\gamma \\bar{n}_g \\bar{n}_l \\Big(\\gamma (2 \\bar{n}_g-1)+1\\Big)+2\n (\\gamma -1)^2 \\bar{n}_l^3\\Big], \\\\\nB_2&:=\\frac{1}{H}\\left(1-\\delta_{\\gamma,1}\\delta_{\\bar{n}_g,0}\\right)\n\\Big(2 \\gamma \\bar{n}_g-2 (\\gamma -1) \\bar{n}_l+1\\Big) \\Big(\\gamma \n \\bar{n}_g+\\bar{n}_l (2 \\gamma \\bar{n}_g-\\gamma \\bar{n}_l+\\bar{n}_l+1)\\Big), \\\\ \nB_3&:=B_2, \\\\ \nB_4&:=\\frac{1}{H}\\left(1-\\delta_{\\gamma,1}\\delta_{\\bar{n}_g,0}\\right)\n\\Big[ 2 \\gamma ^2 (\\bar{n}_g-\\bar{n}_l) \\big(2 \\bar{n}_g\n \\bar{n}_l+\\bar{n}_g-\\bar{n}_l^2\\big)+(2 \\bar{n}_l+1)\n (\\bar{n}_l+1)^2
\\\\\n &\\hspace{1cm}+\\gamma (\\bar{n}_l+1) \\Big(\\bar{n}_g (6\n \\bar{n}_l+4)-\\bar{n}_l (4 \\bar{n}_l+1)+1\\Big)\\Big], \\\\ \nD&:=\\frac{\\gamma}{H}\\left(1-\\delta_{\\gamma,1}\\delta_{\\bar{n}_g,0}\\right)\n(\\bar{n}_g-\\bar{n}_l), \\\\\nH&:=8\\gamma ^2 (\\bar{n}_g-\\bar{n}_l) \\big(2 \\bar{n}_g\n \\bar{n}_l+\\bar{n}_g-\\bar{n}_l^2\\big)+\\gamma (2 \\bar{n}_l+1)^2 (6\n \\bar{n}_g-4 \\bar{n}_l+1)+(2 \\bar{n}_l+1)^3, \\\\\nR_1&:=\\dfrac{1}{4}\\delta_{\\gamma,1}\\delta_{\\bar{n}_g,0}\\left(\\rho_{22}-\\rho_{23}-\\rho_{32}+\\rho_{33}\\right), \\\\\nR_2&:=\\dfrac{1}{2}\\delta_{\\gamma,1} \\delta_{\\bar{n}_g,0} \\left(\\rho_{24}-\\rho_{34}\\right),\\\\\nR_3&:=\\dfrac{1}{2}\\delta_{\\gamma,1} \\delta_{\\bar{n}_g,0} \\left(\\rho_{11}+\\rho_{44}\n+\\rho_{23}+\\rho_{32}+1\\right).\n\\end{aligned}\n\\label{eq:Coes}\n\\end{equation}\nHere we set\n\\begin{equation*}\n\\delta_{\\gamma,1}:=\\left\\{\n\\begin{array}{ccc}\n0 & & 0\\le\\gamma<1 \\\\\n1 & & \\gamma=1\n\\end{array}\\right., \n\\end{equation*}\nand \n\\begin{equation*}\n\\delta_{\\bar{n}_g,0}:=\\left\\{\n\\begin{array}{ccc}\n0 & & \\bar{n}_g >0 \\\\\n1 & & \\bar{n}_g=0\n\\end{array}\\right..\n\\end{equation*}\nNotice that the dependence on the initial state enters through the terms $R_1$, $R_2$, and $R_3$.\n\nWe can consider the evolution $\\rho(0)\\to \\rho(\\infty)$ \nas resulting from a (dissipative) map, namely\n\\begin{equation}\n\\rho(\\infty)={\\cal D}(\\rho(0)).\n\\label{eq:dynmap}\n\\end{equation}\nIn order to find its Kraus decomposition \\cite{Kraus83} we need to treat the case\n$\\gamma<1$ or $\\bar{n}_g>0$ separately from $\\gamma=1$ and $\\bar{n}_g=0$.\n\nIn the former case the map ${\\cal D}$ has a fixed point\n\\begin{equation}\n\\rho_{_{fixed}}(\\infty)=\\begin{pmatrix}\n B_1 & 0 & 0 & 0 \\\\\n 0 & B_2 & D & 0 \\\\\n 0 & D & B_3 & 0 \\\\\n 0 & 0 & 0 & B_4\n \\end{pmatrix}, \\label{eq:staf} \n\\end{equation}\nhence the corresponding Kraus operators can be constructed as
\n\\begin{equation}\nK^\\prime_{jl}=\\sqrt{\\upsilon_j} |\\psi_j\\rangle\\langle l|, \\qquad j,l=1 \\cdots 4,\n\\label{eq:krausstaind}\n\\end{equation}\nby means of the spectral decomposition $\\rho_{_{fixed}}(\\infty)=\\sum_{j=1}^4\\upsilon_j |\\psi_j\\rangle\\langle\\psi_j|$ with normalized eigenvectors,\nwhere\n\\begin{equation}\n\\begin{aligned}\n\\ket{\\psi_1}&=\\begin{pmatrix}\n 1 \\\\\n 0 \\\\\n 0 \\\\\n 0 \n \\end{pmatrix}, \\quad \n \\ket{\\psi_2}=\\begin{pmatrix}\n 0 \\\\\n 0 \\\\\n 0 \\\\\n 1 \n \\end{pmatrix}, \\quad\n \\ket{\\psi_3}=\\frac{1}{\\sqrt{2}}\\begin{pmatrix}\n 0 \\\\\n -1 \\\\\n 1 \\\\\n 0 \n \\end{pmatrix}, \\quad\n \\ket{\\psi_4}=\\frac{1}{\\sqrt{2}}\\begin{pmatrix}\n 0 \\\\\n 1 \\\\\n 1 \\\\\n 0 \n \\end{pmatrix}, \n\\end{aligned}\n\\label{eq:eigendecomp}\n\\end{equation}\nand\n\\begin{equation}\n\\begin{aligned}\n\\upsilon_1&=B_1, \\\\ \n\\upsilon_2&=B_4, \\\\ \n\\upsilon_3&=B_2-D, \\\\ \n\\upsilon_4&=B_2+D. \n\\end{aligned}\n\\label{eq:eigenvalues}\n\\end{equation}\n\nIn contrast, for the case $\\gamma=1$ and $\\bar{n}_g=0$, the stationary state \n\\begin{equation}\n\\label{eq:dressed}\n\\rho_{_{ini}}(\\infty):=\\begin{pmatrix}\n 0 & 0 & 0 & 0 \\\\\n 0 & R_1 & -R_1 & R_2 \\\\\n 0 & -R_1 & R_1 & -R_2 \\\\\n 0 & R_2^{*} & -R_2^{*} & R_3\n \\end{pmatrix}\n\\end{equation}\ndepends on the initial state; hence, in order to find the Kraus operators of the corresponding map,\nwe need to first obtain them for the map\n\\begin{equation}\n\\rho(t)={\\cal D}_t(\\rho(0)),\n\\label{eq:dynmapt}\n\\end{equation}\nwhere the subscript $t$ emphasizes the parametric dependence on time, and then take the limit $t\\to\\infty$.
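As an aside, the fixed-point construction can be checked numerically: with normalized eigenvectors, weighting $|\psi_j\rangle\langle l|$ by $\sqrt{\upsilon_j}$ gives a trace-preserving map that sends every input state to $\rho_{fixed}(\infty)$. A sketch with illustrative values of $B_1$, $B_2$, $B_4$, $D$ (not computed from specific physical parameters):

```python
import numpy as np

def kraus_from_fixed_point(rho_fixed):
    """Kraus operators K_{jl} = sqrt(v_j) |psi_j><l| built from the
    spectral decomposition rho_fixed = sum_j v_j |psi_j><psi_j|
    with normalized eigenvectors |psi_j>."""
    vals, vecs = np.linalg.eigh(rho_fixed)
    ops = []
    for j in range(4):
        for l in range(4):
            bra_l = np.zeros(4)
            bra_l[l] = 1.0
            ops.append(np.sqrt(max(vals[j], 0.0)) * np.outer(vecs[:, j], bra_l))
    return ops

# Illustrative fixed point of the form (eq:staf): B1=0.1, B2=B3=0.3, B4=0.3, D=0.05
rho_fixed = np.array([[0.10, 0,    0,    0],
                      [0,    0.30, 0.05, 0],
                      [0,    0.05, 0.30, 0],
                      [0,    0,    0,    0.30]])
kraus_ops = kraus_from_fixed_point(rho_fixed)
```

Completeness $\sum_{jl} K_{jl}^\dagger K_{jl} = I$ and $\sum_{jl} K_{jl}\rho K_{jl}^\dagger = \rho_{fixed}$ for any unit-trace $\rho$ follow because the $\upsilon_j$ sum to one.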
Implementing this procedure in Appendix \\ref{App:Appendixcoeff}, \nwe obtain\n\\begin{equation}\n\\begin{aligned}\nK_1^{\\prime\\prime}&=\\dfrac{1}{2}\\begin{pmatrix}\n 0 & 0 & 0 & 0 \\\\\n 0 & 1 & -1 & 0 \\\\\n 0 & -1 & 1 & 0 \\\\\n 0 & 0 & 0 & 2\n \\end{pmatrix}, \\ \\ \\\nK_2^{\\prime\\prime}=\\begin{pmatrix}\n 0 & 0 & 0 & 0 \\\\\n 0 & 0 & 0 & 0 \\\\\n 0 & 0 & 0 & 0 \\\\\n 1 & 0 & 0 & 0\n \\end{pmatrix}, \\\\\nK_3^{\\prime\\prime}&=\\begin{pmatrix}\n 0 & 0 & 0 & 0 \\\\\n 0 & 0 & 0 & 0 \\\\\n 0 & 0 & 0 & 0 \\\\\n 0 & 0 & 0 & 0\n \\end{pmatrix}, \\ \\ \\\nK_4^{\\prime\\prime}=\\dfrac{1}{\\sqrt{2}}\\begin{pmatrix}\n 0 & 0 & 0 & 0 \\\\\n 0 & 0 & 0 & 0 \\\\\n 0 & 0 & 0 & 0 \\\\\n 0 & 1 & 1 & 0\n \\end{pmatrix}.\n\\end{aligned}\n\\label{eq:krausinf}\n\\end{equation}\n\nTaking into account both cases \\eqref{eq:krausstaind} and \\eqref{eq:krausinf}, the stationary state \\eqref{eq:stationarystate} can be written as\n\\begin{equation}\n\\rho(\\infty)=\\sum_{jl=1}^{4}K_{jl}\\rho(0) K_{jl}^{\\dagger},\n\\end{equation} \nwhere\n\\begin{equation}\nK_{jl}=\\left( 1-\\delta_{\\gamma,1}\\delta_{\\bar{n}_g,0}\\right)K^\\prime_{jl}\n+\\delta_{\\gamma,1}\\delta_{\\bar{n}_g,0} \\delta_{j,l}K^{\\prime\\prime}_j.\n\\end{equation} \n\n\n\n\\section{Entangling power}\\label{sec:entpower}\n\n\nIn what follows, we quantify the amount of entanglement by the concurrence \\cite{Wootters1998}, defined as \n\\begin{equation}\nE(\\rho):=\\max\\left\\lbrace 0, \\sqrt{\\ell_1}- \\sqrt{\\ell_2}- \\sqrt{\\ell_3}- \\sqrt{\\ell_4}\\right\\rbrace,\n\\label{eq:con}\n\\end{equation}\nwhere $\\ell_j$, $j=1,2,3,4$, are the eigenvalues (in decreasing order) of \n$\\rho\\left(\\sigma_1^y\\otimes\\sigma_2^y\\rho^{*}\\sigma_1^y\\otimes\\sigma_2^y\\right)$ with $\\rho^*$ the complex conjugate of $\\rho$ and $\\sigma_k^y:=i(\\sigma_k-\\sigma_k^\\dag)$. \n\nAssume the initial state of the system to be pure and factorable.
Its general parametrization is\n\\begin{equation}\n\\begin{aligned}\n|\\psi(0)\\rangle&=\\left( \\cos(\\theta_1\/2) \\ket{e_1}+\\sin(\\theta_1\/2)e^{i\\varphi_1}\\ket{g_1}\\right) \\\\\n&\\otimes\\left( \\cos(\\theta_2\/2) \\ket{e_2}+\\sin(\\theta_2\/2)e^{i\\varphi_2}\\ket{g_2}\\right),\n\\end{aligned}\n\\label{eq:inifact}\n\\end{equation}\nwith $\\theta_k\\in\\left[ 0,\\pi\\right] $ and $\\varphi_k\\in\\left[ 0,2\\pi\\right] $ for $k=1,2$.\nThen, the concurrence \\eqref{eq:con} for the stationary state \\eqref{eq:stationarystate} becomes\n\\begin{equation}\\label{steadyconc}\n\\begin{aligned}\nE&=2\\left(\\left|D\\right|-\\sqrt{B_1 B_4}+|R_1|\\right)\\\\\n&=2\\left|D\\right|-2\\sqrt{B_1 B_4}\n+\\delta_{\\gamma,1}\\delta_{\\bar{n}_g,0}\\dfrac{1}{4}\n\\left| 1-\\cos\\theta_1\\cos\\theta_2-\\cos(\\varphi_1-\\varphi_2)\\sin\\theta_1\\sin\\theta_2\\right|.\n\\end{aligned}\n\\end{equation}\nIt is worth noticing that when $\\gamma=1$ and $\\bar{n}_g=0$ the third term contributes, while the second does not because $B_1=0$.\n\nQuite generally the stationary entanglement \\eqref{steadyconc} depends on the initial state. However, we can say that a map is a good entangler when the average of the final entanglement over all possible initial states is positive. Building on \\cite{Zanardi2000}, we define the entangling power of ${\\cal D}$ as\n\\begin{equation}\n{\\mathfrak E}({\\cal D}):=\\int E\\left( {\\cal D} ( |\\psi(0)\\rangle\\langle \\psi(0)|)\\right) \\, d\\mu( |\\psi(0)\\rangle),\n\\label{eq:enpower}\n\\end{equation}\nwhere $ d\\mu( |\\psi(0)\\rangle)$ is the probability measure over the submanifold of factorable states in $\\mathbb{C}^2\\otimes \\mathbb{C}^2$. This measure is induced by the Haar measure of ${\\rm SU}(2) \\otimes {\\rm SU}(2)$.
Specifically, referring to the parametrization of \\eqref{eq:inifact}, it reads\n\\begin{equation}\nd\\mu( |\\psi(0)\\rangle)=\\frac{1}{16\\pi^2}\\prod\\limits_{k=1}^2 \\sin\\theta_k\\text{d}\\theta_k\\text{d}\\varphi_k.\n\\end{equation}\nThis measure is normalized to 1. It is easy to see that in this case the entangling power $\\mathfrak E$ lies within $[0,1]$.\nThus from \\eqref{steadyconc} and \\eqref{eq:enpower} we get \n\\begin{equation}\\label{epower}\n\\begin{aligned}\n\\mathfrak{E}=2\\left|D\\right|-2\\sqrt{B_1 B_4}\n+\\delta_{\\gamma,1}\\delta_{\\bar{n}_g,0}\\dfrac{1}{4}.\n\\end{aligned}\n\\end{equation}\nFig.\\ref{fig:entpow3D0} shows the entangling power \\eqref{epower} as a function of $\\gamma$ and \n$\\bar{n}_l$ for $\\bar{n}_g=0$. There, we can see that for $\\gamma<1$ and $\\bar{n}_l=0$ we have $\\mathfrak{E}=0$. In contrast, for any value of $\\gamma>0.7$, there exists a nonzero optimal value of local thermal noise $\\bar{n}_l$ maximizing the entangling power; a phenomenon reminiscent of the \\emph{stochastic resonance} effect \\cite{Gammaitoni1998}.
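The average (eq:enpower) can also be estimated by direct Monte Carlo sampling of the measure above. A minimal sketch (the map is passed in as a generic function $\rho\mapsto\rho'$; all helper names are ours): draw $(\theta_k,\varphi_k)$ from $\frac{1}{2}\sin\theta_k\,d\theta_k\,\frac{1}{2\pi}d\varphi_k$, build the product state (eq:inifact), and average the concurrence (eq:con) of the mapped state:

```python
import numpy as np

sy = np.array([[0, -1j], [1j, 0]])
YY = np.kron(sy, sy)

def concurrence(rho):
    """Wootters concurrence, Eq. (eq:con)."""
    ev = np.sort(np.real(np.linalg.eigvals(rho @ YY @ rho.conj() @ YY)))[::-1]
    lam = np.sqrt(np.clip(ev, 0, None))  # guard tiny negative round-off
    return max(0.0, lam[0] - lam[1] - lam[2] - lam[3])

def sample_product_state(rng):
    """Draw |psi(0)> of Eq. (eq:inifact) from the measure d mu."""
    kets = []
    for _ in range(2):
        theta = np.arccos(1 - 2 * rng.random())  # density sin(theta)/2
        phi = 2 * np.pi * rng.random()
        kets.append(np.array([np.cos(theta / 2),
                              np.sin(theta / 2) * np.exp(1j * phi)]))
    return np.kron(kets[0], kets[1])

def entangling_power_mc(apply_map, rng, samples=500):
    """Monte Carlo estimate of Eq. (eq:enpower) for a map rho -> rho'."""
    acc = 0.0
    for _ in range(samples):
        psi = sample_product_state(rng)
        acc += concurrence(apply_map(np.outer(psi, psi.conj())))
    return acc / samples
```

Plugging in the stationary map ${\cal D}$ reproduces the closed form (epower) up to sampling error.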
Such an optimal value of noise tends to increase as $\\gamma$ approaches $1$ (as one can also argue from Fig.\\ref{fig:entpowregions}).\n\nWhen $\\gamma$ attains the value 1, the curve of $\\mathfrak{E}$ vs $\\bar{n}_l$ is shifted upward by an amount $1\/4$, as shown in Fig.\\ref{fig:entpowg1}, and the maximum value of $\\mathfrak{E}$, namely 7\/12, is asymptotically achieved for $\\bar{n}_l\\to\\infty$.\n\nBy increasing the value of $\\bar{n}_g$ from zero, the region $\\{\\gamma,\\bar{n}_l\\}$ of positive values of \n$\\mathfrak{E}$ shrinks and the maxima decrease, as can readily be seen in Figs.\\ref{fig:entpow3D001} and \\ref{fig:entpow3D01}.\nHence we can conclude that global thermal noise is detrimental to stationary entanglement, while a suitable amount of local thermal noise is vital.\n\nThis can be explained by noting that the local thermal baths inject excitations into the system incoherently. Each excitation, thanks to the interaction mimicked by the global environment, is then shared by the two qubits, which end up in an entangled state resembling $\\frac{1}{\\sqrt{2}}(|e_1g_2\\rangle+|g_1e_2\\rangle)$. If, however, the local noise is too strong, it blurs this effect. In contrast, when $\\bar{n}_g>0$ the global bath tends to coherently inject two excitations into the system, which is then driven into a separable state resembling $|e_1 e_2\\rangle$.
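For reference, the closed form (epower) with the coefficients (eq:Coes) is straightforward to evaluate for $\gamma<1$ or $\bar{n}_g>0$, where the Kronecker-delta term vanishes. A sketch (the function name is ours; where the raw expression is negative the map is not entangling, corresponding to the grey region of Fig. 3, so we clip at zero):

```python
import numpy as np

def entangling_power(gamma, n_g, n_l):
    """Eq. (epower) via the coefficients of Eq. (eq:Coes),
    valid for gamma < 1 or n_g > 0 (delta terms vanish)."""
    H = (8 * gamma**2 * (n_g - n_l) * (2 * n_g * n_l + n_g - n_l**2)
         + gamma * (2 * n_l + 1)**2 * (6 * n_g - 4 * n_l + 1)
         + (2 * n_l + 1)**3)
    B1 = (2 * gamma**2 * n_g**2
          - (gamma - 1) * n_l**2 * (6 * gamma * n_g + 1)
          + 2 * gamma * n_g * n_l * (gamma * (2 * n_g - 1) + 1)
          + 2 * (gamma - 1)**2 * n_l**3) / H
    B4 = (2 * gamma**2 * (n_g - n_l) * (2 * n_g * n_l + n_g - n_l**2)
          + (2 * n_l + 1) * (n_l + 1)**2
          + gamma * (n_l + 1) * (n_g * (6 * n_l + 4)
                                 - n_l * (4 * n_l + 1) + 1)) / H
    D = gamma * (n_g - n_l) / H
    # negative raw values mean the map is not entangling: E = 0
    return max(0.0, 2 * abs(D) - 2 * np.sqrt(B1 * B4))
```

Scanning this function over $\{\gamma,\bar{n}_l\}$ reproduces the qualitative behavior described above: zero for purely local dissipation, and a noise-activated maximum at large $\gamma$.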
\n\n\n\\begin{figure}[ht!]\n\\centering\n\\includegraphics[width=0.6\\textwidth]{fig2.pdf}\n \\fcaption{Entangling power $\\mathfrak{E}$ as a function of $\\gamma$ and $\\bar{n}_l$ for $\\bar{n}_g=0$.\n The value of $\\mathfrak{E}$ for $\\gamma$ exactly equal to 1 is not reported here.}\n \\label{fig:entpow3D0}\n\\end{figure}\n\n\n\\begin{figure}[ht!]\n\\centering\n\\includegraphics[width=0.5\\textwidth]{fig3.pdf}\n \\fcaption{Regions of the parameter space $\\{\\gamma,\\bar{n}_l\\}$ where the entangling power $\\mathfrak{E}$ is greater than zero (white) and zero (grey) for $\\bar{n}_g=0$. Along the dashed line $\\mathfrak E$ takes its maximum value. }\n \\label{fig:entpowregions}\n\\end{figure}\n\n\\begin{figure}[ht!]\n\\centering\n\\includegraphics[width=0.5\\textwidth]{fig4.pdf}\n \\fcaption{Entangling power $\\mathfrak{E}$ as a function of $\\bar{n}_l$ \n for $\\bar{n}_g=0$ and $\\gamma=1$. The bottom curve resulting from the contribution of the first two terms in Eq.\\eqref{epower} is shifted upward due to the contribution of the third term in Eq.\\eqref{epower}.}\n \\label{fig:entpowg1}\n\\end{figure}\n\n\\begin{figure}[ht!]\n\\centering\n\\includegraphics[width=0.6\\textwidth]{fig5.pdf}\n \\fcaption{Entangling power $\\mathfrak{E}$ as a function of $\\gamma$ and $\\bar{n}_l$ \n for $\\bar{n}_g=0.01$.}\n \\label{fig:entpow3D001}\n\\end{figure}\n\n\\begin{figure}[ht!]\n\\centering\n\\includegraphics[width=0.6\\textwidth]{fig6.pdf}\n \\fcaption{Entangling power $\\mathfrak{E}$ as a function of $\\gamma$ and $\\bar{n}_l$ \n for $\\bar{n}_g=0.1$.}\n \\label{fig:entpow3D01}\n\\end{figure}\n\n\n\\section{Conclusions} \\label{sec:conclusion}\n\nIn this paper, we have considered a model of two qubits dissipating through a ``glocal\" map, i.e., into local and global environments (generally at finite temperatures). Through the parameter $\\gamma$, the model can interpolate between the purely local and the purely global regime.
\n\nWe have then determined conditions for the presence of long-living entanglement.\nThis has been done by considering the entangling capabilities of the ``glocal\" dissipative map ${\\cal D}$ through the entangling power introduced in \\eqref{eq:enpower}. \n\nIt has been shown that the number of thermal excitations in the local environments plays a crucial role in the stationary entanglement of the two qubits. \nOn the one hand, it turns out (rather counterintuitively) that entanglement in the presence of local environments is achievable when these are at nonzero temperature.\nThis represents a remarkable extension of the \\emph{stochastic resonance} effect \narising in spin chains from the interplay of local dissipative and dephasing noise sources in the presence of Hamiltonian couplings \\cite{Rivas2009}.\nHere it appears in a context lacking Hamiltonian couplings, driven by the interplay of noise sources of the same (dissipative) kind.\nOn the other hand, while the global environment is vital for the indirect interaction of the qubits, it should ideally be at zero temperature. In fact, thermal noise from the global environment spoils entanglement.\n \nConcerning $\\mathfrak{E}({\\cal D})$, it is also worth noting its sudden enhancement for zero-temperature global dissipation (Fig. \\ref{fig:entpowg1}). This can be regarded as the signature of a kind of phase transition occurring at $\\gamma=1$.\n\nThe map ${\\cal D}$, thanks to the properties discussed in Section \\ref{sec:Kraus}, can also be considered\nas a quantum channel and characterized in terms of its information transmission capabilities \\cite{RMP14}. \nFor instance, when $\\gamma\\neq 1$ the output space is of dimension 1, hence its capacity\nvanishes.
In contrast, when $\\gamma = 1$ and $\\bar{n}_g = 0$ the output space is of dimension 2 \n(spanned by $\\frac{1}{\\sqrt{2}}\\left(\\ket{e_1}\\ket{g_2}-\\ket{g_1}\\ket{e_2}\\right)$ and $\\ket{g_1}\\ket{g_2}$) \nand the capacity could be up to 1 bit or 1 qubit\n(depending on whether classical or quantum information is considered to be transmitted). \nGiven this markedly different behavior across the parameter region, the map could also be considered a paradigmatic model for channel discrimination \\cite{Pirs11}.\nThese investigations are left for future work.\n\nThe present study can be of interest for experimental situations where\nthe interplay of local and global environments is relevant.\nAs an example we may mention cavity QED experiments in which atomic qubits are confined inside a high-finesse optical cavity \\cite{Guthahrlein2001} and experience local spontaneous emission as well as\nthe global effect of the vacuum bath lying outside the cavity.\nAnother example is provided by charge qubits based on double quantum dots (or analogously Cooper-pair boxes) \\cite{CPA08}, with local electronic environments and a global environment arising from voltage fluctuations.\n\n\\nonumsection{Acknowledgements}\n\\noindent\nA.N.
would like to thank the University of Camerino\nfor kind hospitality and the Ministry of Science, Research\nand Technology of Iran for financial support.\n\n\n\\nonumsection{References}\n\n\\newcommand{\\ra}{\\rangle}\n\\newcommand{\\la}{\\langle}\n\\newcommand{\\p}{\\partial}\n\\newcommand{\\hp}{{\\Phi}}\n\\newcommand{\\hq}{{Q_B}}\n\\newcommand{\\he}{{\\eta_0}}\n\\newcommand{\\ha}{{{A}}}\n\\newcommand{\\rrr}{\\big\\rangle\\big\\rangle}\n\\newcommand{\\lllb}{\\Bigl\\langle\\Bigl\\langle}\n\\newcommand{\\rrrb}{\\Bigr\\rangle\\Bigr\\rangle}\n\\allowdisplaybreaks\n\n\\makeatother\n\n\\usepackage{babel}\n\\begin{document}\n{}~ \\hfill\\vbox{\\hbox{CTP-SCU\/2019008}}\\break\n\\vskip 3.0cm\n\\centerline{\\Large \\bf Are nonperturbative AdS vacua possible in bosonic string theory?}\n\n\n\\vspace*{10.0ex}\n\\centerline{\\large Peng Wang, Houwen Wu and Haitang Yang}\n\\vspace*{7.0ex}\n\\vspace*{4.0ex}\n\\centerline{\\large \\it College of physics}\n\\centerline{\\large \\it Sichuan University}\n\\centerline{\\large \\it Chengdu, 610065, China} \\vspace*{1.0ex}\n\\vspace*{4.0ex}\n\n\n\\centerline{pengw@scu.edu.cn, iverwu@scu.edu.cn, hyanga@scu.edu.cn}\n\\vspace*{10.0ex}\n\\centerline{\\bf Abstract} \\bigskip \\smallskip\nIn this paper, following the work of Hohm and Zwiebach [arXiv:1905.06583], we show that in bosonic string theory nonperturbative anti-de Sitter (AdS) vacua could exist with all $\\alpha^{\\prime}$ corrections included.
We also discuss the possibility of the coexistence of nonperturbative dS and AdS vacua.\n\n\\vfill\n\\eject\n\\baselineskip=16pt\n\\vspace*{10.0ex}\n\nWhether bosonic string theory permits stable de Sitter (dS) or anti-de Sitter (AdS) vacua is a long-standing unsolved problem.\nThere are conjectures that superstring theory does not admit dS vacuum solutions \\cite{Obied:2018sgi,Agrawal:2018own,Garg:2018reu}.\nAnother conjecture states that there is no stable nonsupersymmetric AdS vacuum with fluxes \\cite{Ooguri:2016pdq}. Remarkably, by analyzing the nonperturbative properties of the spacetime action of closed string theory (also known as the low-energy effective string theory), Hohm and Zwiebach \\cite{Hohm:2019ccp,Hohm:2019jgu}\nrecently showed that \\emph{nonperturbative} dS vacua are possible in bosonic string theory. The most important ingredients in their arguments are $O(d,d)$ symmetry and the classification of all of the $\\alpha'$ corrections for particular configurations.\n\nIt is well known from the work of Meissner and Veneziano \\cite{Veneziano:1991ek} in 1991 that, at the zeroth order of $\\alpha'$, when all fields depend only on time, the $D=d+1$-dimensional spacetime action of closed string theory reduces to an $O(d,d)$-invariant \\emph{reduced action}. Soon after that, in Refs. \\cite{Sen:1991zi,Sen:1991cn} Sen extended this result to full string field theory. Specifically, by considering an exact solution of the string field that was independent of $m$-dimensional spacetime coordinates ($m\\le d$)\\footnote{We concentrate on noncompact configurations here.}, Sen proved the following. (i) The space of such solutions has an $O(m,m)$ symmetry. In the language of the low-energy effective theory, the reduced action derived from such solutions possesses an $O(m,m)$ symmetry to all orders in $\\alpha'$. (ii) The $m$ coordinates can all be spacelike or include one timelike coordinate, as explained in Ref. \\cite{Sen:1991cn}.
(iii) In the solution space, inequivalent solutions are connected by nondiagonal $O(m)\\otimes O(m)$ transformations [$O(m-1,1)\\otimes O(m-1,1)$ if one of the $m$ coordinates is timelike]. (iv) Other generators of $O(m,m)$ outside of the nondiagonal $O(m)\\otimes O(m)$ [or $O(m-1,1)\\otimes O(m-1,1)$] generate gauge transformations accompanied by a shift of the dilaton, and thus equivalent solutions. On the other hand, in Ref. \\cite{Meissner:1991zj}, from the perspective of the $\\sigma$-model expansion, since the nilpotency of the BRST operator $Q$ is not altered by an $O(d,d)$ transformation, it was argued that the $O(d,d)$ symmetry should persist at all orders in $\\alpha'$ for the reduced action. It is expected that, in terms of the standard fields, the $O(d,d)$ transformations receive higher-order $\\alpha'$ corrections when introducing higher-derivative terms to the reduced action. For configurations depending only on time, to the first order in $\\alpha'$, Meissner demonstrated in Ref. \\cite{Meissner:1996sa} that one can trade the corrected transformations for standard $O(d,d)$ transformations acting on $\\alpha'$-corrected fields. In the appendix we show that this is also true for configurations that depend only on one spatial coordinate $x$, i.e., the case we study in this paper.\n\n\nAs for the yet unknown higher-order $\\alpha'$ corrections, some important progress has been made recently using the formalism of double field theory \\cite{Hohm:2013jaa,Hohm:2014xsa,Marques:2015vua,Hohm:2016lge,Hohm:2015doa,Baron:2018lve}. Remarkably, in Ref. \\cite{Hohm:2019jgu, Hohm:2015doa} Hohm and Zwiebach demonstrated that, for cosmological, purely time-dependent configurations, the $O(d,d)$-covariant closed string spacetime action can be expressed in a very simple form.
To all orders, the $\\alpha^{\\prime}$ corrections do not involve\nthe dilaton (which enters only trivially) and can be constructed using even powers of\n$\\partial_{t}\\mathcal{S}$, where $\\mathcal{S}$ is the spatial part\nof the generalized metric defined in Eq. (\\ref{M}). This surprising simplification of the $\\alpha^{\\prime}$ corrections enabled them to discuss the nonperturbative solutions. The most interesting result they obtained is that nonperturbative dS vacua are possible for bosonic string theory \\cite{Hohm:2019ccp,Hohm:2019jgu}, which possibly provides a cornerstone for the connection between string theory and our real world. In this paper, following their derivations, we show that nonperturbative AdS vacua are also possible with all $\\alpha'$ corrections for bosonic string theory.\n\nIt is worth noting that we work in the string frame and not the Einstein frame, the same as in Hohm and Zwiebach's work \\cite{Hohm:2019ccp,Hohm:2019jgu}. It is still unclear if there could be dS or AdS solutions in the Einstein frame, since when we substitute the\nsolutions with a constant dilaton field $\\phi$ and Hubble parameter\n$\\bar{H}_{0}$ back into the Einstein frame, $\\bar{H}_{0}^{E}$ goes to\nzero and the metric becomes flat. Another issue is that in order to completely determine the dS\/AdS vacua, we still need to know all of the $\\alpha'$ corrections. One of the purposes of this paper is to refute the nonexistence of AdS vacua in bosonic string theory, rather than to provide exact solutions.\n\n\nFor the sake of completeness, let us briefly summarize Hohm and Zwiebach's work on nonperturbative dS vacua. Details can be found in Ref. \\cite{Hohm:2019jgu}.
To the zeroth order of $\\alpha'$, the $D=d+1$-dimensional spacetime action of closed string theory is\n\n\n\n\n\n\n\n\n\\begin{equation}\nI_{0}\\equiv\\int d^{D}x\\sqrt{-g}e^{-2\\phi}\\left[R+4\\left(\\partial_{\\mu}\\phi\\right)^{2}-\\frac{1}{12}H_{ijk}H^{ijk}\\right],\n\\end{equation}\n\n\\noindent where $g_{\\mu\\nu}$ is the string metric, $\\phi$ is the\ndilaton and $H_{ijk}=3\\partial_{\\left[i\\right.}b_{\\left.jk\\right]}$\nis the field strength of the antisymmetric Kalb-Ramond $b_{ij}$\nfield. For cosmological backgrounds, choosing the synchronous\ngauge $g_{tt}=-1$, $g_{ti}=b_{t\\mu}=0$,\n\n\\begin{equation}\ng_{\\mu\\nu}=\\left(\\begin{array}{cc}\n-1 & 0\\\\\n0 & G_{ij}\\left(t\\right)\n\\end{array}\\right),\\qquad b_{\\mu\\nu}=\\left(\\begin{array}{cc}\n0 & 0\\\\\n0 & B_{ij}\\left(t\\right)\n\\end{array}\\right),\\qquad\\phi=\\phi\\left(t\\right),\n\\end{equation}\n\n\\noindent and defining the $O\\left(d,d\\right)$ dilaton $\\Phi$ as\n\n\\noindent\n\\begin{equation}\ne^{-\\Phi}=\\sqrt{g}e^{-2\\phi},\n\\end{equation}\n\n\\noindent the action can be rewritten as\n\n\\noindent\n\\begin{equation}\nI_{0}=\\int dte^{-\\Phi}\\left[-\\dot{\\Phi}^{2}-\\frac{1}{8}\\mathrm{Tr}\\left(\\dot{\\mathcal{S}}^{2}\\right)\\right],\\label{ST action}\n\\end{equation}\n\n\\noindent with\n\n\\noindent\n\\begin{equation}\nM=\\left(\\begin{array}{cc}\nG^{-1} & -G^{-1}B\\\\\nBG^{-1} & G-BG^{-1}B\n\\end{array}\\right),\\qquad\\mathcal{S}=\\eta M=\\left(\\begin{array}{cc}\nBG^{-1} & G-BG^{-1}B\\\\\nG^{-1} & -G^{-1}B\n\\end{array}\\right),\\label{M}\n\\end{equation}\n\n\\noindent where $M$, a $2d\\times 2d$ matrix, is the spatial part of the generalized metric $\\mathcal{H}$ of\ndouble field theory, $\\dot{A}\\equiv\\partial_{t}A$, and $\\eta$ is the invariant metric of the $O\\left(d,d\\right)$\ngroup\n\n\\noindent\n\\begin{equation}\n\\eta=\\left(\\begin{array}{cc}\n0 & I\\\\\nI & 0\n\\end{array}\\right).\n\\end{equation}\n\n\n\n\\noindent Noticing that $M$ is symmetric and 
$\\mathcal{S}=\\mathcal{S}^{-1}$, this action\nis manifestly invariant under the $O\\left(d,d\\right)$ transformations\n\n\\noindent\n\\begin{equation}\n\\Phi\\longrightarrow\\Phi,\\qquad\\mathcal{S}\\longrightarrow\\tilde{\\mathcal{S}}=\\Omega^{T}\\mathcal{S}\\Omega,\\label{O(d,d) trans}\n\\end{equation}\n\n\\noindent where $\\Omega$ is a constant matrix, satisfying\n\n\\noindent\n\\begin{equation}\n\\Omega^{T}\\eta\\Omega=\\eta.\n\\end{equation}\n\n\n\n\\noindent If we choose the Friedmann-Lemaitre-Robertson-Walker (FLRW) metric, $G_{ij}=\\delta_{ij}a^{2}\\left(t\\right)$, and a\nvanishing Kalb-Ramond field $B=0$:\n\n\\begin{equation}\nds^{2}=-dt^{2}+a^{2}\\left(t\\right)\\delta_{ij}dx^{i}dx^{j},\n\\label{FLRW}\n\\end{equation}\n\n\\noindent the matrix $\\mathcal{S}$ becomes\n\n\\noindent\n\\begin{equation}\n\\mathcal{S}=\\left(\\begin{array}{cc}\n0 & G\\\\\nG^{-1} & 0\n\\end{array}\\right).\n\\end{equation}\n\n\\noindent Applying the $O(d)\\otimes O(d)$ transformation, we\nobtain a new solution,\n\n\\begin{equation}\n\\tilde{\\mathcal{S}}=\\left(\\begin{array}{cc}\n0 & G^{-1}\\\\\nG & 0\n\\end{array}\\right),\n\\end{equation}\n\n\\noindent which implies that the action is invariant under $a\\left(t\\right)\\rightarrow a^{-1}\\left(t\\right)$, which is known as the scale-factor duality in traditional string\ncosmology. The next step is to include the $\\alpha^{\\prime}$\ncorrections. 
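The scale-factor duality can be confirmed with a short numerical computation. The script below is our own illustration (all helper names are ours, not from the original derivation): it builds $\\mathcal{S}$ for an arbitrary test scale factor and checks that the kinetic invariant $\\mathrm{Tr}(\\dot{\\mathcal{S}}^{2})$ is unchanged under $a(t)\\rightarrow a^{-1}(t)$.

```python
# Numerical illustration (ours, not from the original derivation) that the
# kinetic term Tr(S'^2) of the O(d,d) action is invariant under the
# scale-factor duality a(t) -> 1/a(t).  We take d = 2 and an arbitrary
# test scale factor a(t) = 1 + t^2, so S is a 4x4 matrix.

def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def trace(A):
    return sum(A[i][i] for i in range(len(A)))

def S_matrix(a, d=2):
    # S = ((0, G), (G^{-1}, 0)) with G = a^2 * I_d and vanishing B-field
    n = 2 * d
    S = [[0.0] * n for _ in range(n)]
    for i in range(d):
        S[i][d + i] = a * a          # G block
        S[d + i][i] = 1.0 / (a * a)  # G^{-1} block
    return S

def tr_Sdot_sq(a_of_t, t, eps=1e-5):
    # central finite difference for dS/dt, then Tr((dS/dt)^2)
    Sp, Sm = S_matrix(a_of_t(t + eps)), S_matrix(a_of_t(t - eps))
    n = len(Sp)
    Sdot = [[(Sp[i][j] - Sm[i][j]) / (2 * eps) for j in range(n)]
            for i in range(n)]
    return trace(matmul(Sdot, Sdot))

a = lambda t: 1.0 + t * t
a_dual = lambda t: 1.0 / a(t)        # scale-factor dual

t0 = 0.7
v, v_dual = tr_Sdot_sq(a, t0), tr_Sdot_sq(a_dual, t0)
print(v, v_dual)   # equal: Tr(S'^2) = -8 d (a'/a)^2 is insensitive to a -> 1/a
```

The two values agree because $\\mathrm{Tr}(\\dot{\\mathcal{S}}^{2})=-8d(\\dot{a}\/a)^{2}$ depends on the scale factor only through $H^{2}$, which is even under the duality.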
Benefiting from the $O\\left(d,d\\right)$ invariance, the corrections\nare classified by even powers of $\\dot{\\mathcal{S}}$ only:\n\n\\begin{eqnarray}\nI & = & \\int d^{D}x\\sqrt{-g}e^{-2\\phi}\\left(R+4\\left(\\partial\\phi\\right)^{2}-\\frac{1}{12}H^{2}+\\frac{1}{4}\\alpha^{\\prime}\\left(R^{\\mu\\nu\\rho\\sigma}R_{\\mu\\nu\\rho\\sigma}+\\ldots\\right)+{\\alpha'}^2(\\ldots)+\\ldots \\right),\\label{eq:original action with alpha}\\\\\n & = & \\int dte^{-\\Phi}\\left(-\\dot{\\Phi}^{2}+ {\\sum_{k=1}^{\\infty}}\\left(\\alpha^{\\prime}\\right)^{k-1}c_{k}\\mathrm{tr}\\left(\\dot{\\mathcal{S}}^{2k}\\right)\\right).\\label{eq:Odd action with alpha}\n\\end{eqnarray}\n\n\\noindent Eq. (\\ref{eq:original action with alpha}) is the action for the general background\nwith all $\\alpha^{\\prime}$ corrections. Eq. (\\ref{eq:Odd action with alpha}) is\nthe $O\\left(d,d\\right)$-covariant action applied to the metric (\\ref{FLRW}) with $B=0$,\nwhere $c_{1}=-\\frac{1}{8}$\nto recover Eq. (\\ref{ST action}) and $c_{k\\geq2}$ are\nundetermined constants. Using the action (\\ref{eq:Odd action with alpha}), in Refs. \\cite{Hohm:2019ccp,Hohm:2019jgu} Hohm and Zwiebach showed that nonperturbative\ndS vacua are permitted for infinitely many classes of $c_{k\\geq2}$, namely, $a(t)=e^{H_0 t}$ with $H_0\\not= 0$.\n\nNow we want to investigate whether nonperturbative AdS vacua are also allowed. To address this question, an appropriate ansatz is crucial. We take the ansatz\n\n\\begin{equation}\nds^{2}=-a^{2}\\left(x\\right)dt^{2}+dx^{2}+a^{2}\\left(x\\right)\\left(dy^{2}+dz^{2}+\\ldots\\right),\\label{eq:BH metric}\n\\end{equation}\n\n\\noindent whose metric components depend on a single space direction, say, $x$. The dimensionality is still $D=d+1$. As we explained earlier, in Refs. \\cite{Sen:1991zi,Sen:1991cn} Sen proved that the reduced action based on such solutions also maintains $O(d,d)$ symmetry to all orders in $\\alpha'$. 
The coset for the space of such solutions is $O(d-1,1)\\otimes O(d-1,1)\/O(d-1,1)$, in contrast to $O(d)\\otimes O(d)\/O(d)$ for the cosmological solutions (\\ref{FLRW}). In the Appendix we explicitly show that for this ansatz the spacetime action (\\ref{eq:original action with alpha}) also possesses the standard $O(d,d,R)$ symmetry and can be reduced to\n\n\\begin{equation}\n\\bar{I}= -\\int dxe^{-\\Phi}\\left(-\\Phi^{\\prime2}+ {\\sum_{k=1}^\\infty}\\left(\\alpha^{\\prime}\\right)^{k-1}\\bar{c}_{k}\\mathrm{tr}\\left(\\mathcal{M}^{\\prime2k}\\right)\\right),\\label{eq: Odd with alpha x}\n\\end{equation}\n\n\\noindent where $A^{\\prime}\\equiv\\partial_{x}A$. Note the overall minus sign and the fact that we have a new set of undetermined coefficients $\\bar c_k$ other than the $c_k$'s in Eq. (\\ref{eq:Odd action with alpha}). It turns out that\n\n\\begin{equation}\n\\bar{c}_{2k-1}=c_{2k-1},\\quad\\quad \\bar{c}_{2k}=-c_{2k},\\quad\\quad \\mathrm{for}\\quad k=1,2,3\\ldots \\label{eq:coeff relation}\n\\end{equation}\n\n\\noindent and\n\n\\begin{equation}\n\\mathcal{M}=\\left(\\begin{array}{cccc}\n0 & 0 & -a^{2}\\left(x\\right) & 0\\\\\n0 & 0 & 0 & a^{2}\\left(x\\right)\\delta_{ij}\\\\\n-a^{-2}\\left(x\\right) & 0 & 0 & 0\\\\\n0 & a^{-2}\\left(x\\right)\\delta_{ij} & 0 & 0\n\\end{array}\\right),\n\\end{equation}\n\n\\noindent where $i,j=y,z,\\ldots$. 
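As a quick cross-check of this reduction (our own, with $d=2$ chosen for brevity), the block matrix $\\mathcal{M}$ can be verified numerically to square to the identity, and for $a(x)=e^{\\bar{H}x}$ its derivative traces collapse to $x$-independent constants, with all odd traces vanishing.

```python
# Numerical cross-check (ours) of the matrix M(x) above for d = 2, i.e.
# spatial-block coordinates (t, y): M^2 = 1, the odd traces Tr(M'^(2k+1))
# vanish, and for a(x) = exp(Hbar*x) the even traces Tr(M'^(2k)) are
# x-independent constants, as needed for constant-Hbar solutions.
import math

def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def trace(A):
    return sum(A[i][i] for i in range(len(A)))

def M_matrix(a):
    a2 = a * a
    return [[0, 0, -a2, 0],
            [0, 0, 0, a2],
            [-1 / a2, 0, 0, 0],
            [0, 1 / a2, 0, 0]]

Hbar, x0, eps = 0.3, 0.5, 1e-5
a = lambda x: math.exp(Hbar * x)

M = M_matrix(a(x0))
Mp, Mm = M_matrix(a(x0 + eps)), M_matrix(a(x0 - eps))
Mprime = [[(Mp[i][j] - Mm[i][j]) / (2 * eps) for j in range(4)]
          for i in range(4)]

# M^2 = 1
M2 = matmul(M, M)
assert all(abs(M2[i][j] - (1.0 if i == j else 0.0)) < 1e-9
           for i in range(4) for j in range(4))

# Tr(M'^n) for n = 1..4: odd traces vanish, even traces are constants
P, traces = Mprime, []
for n in range(1, 5):
    traces.append(trace(P))
    P = matmul(P, Mprime)
print(traces)   # [0, Tr(M'^2), 0, Tr(M'^4)], with Tr(M'^2) = -8 d Hbar^2 here
```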
The equations of motion (EOM) of (\\ref{eq: Odd with alpha x}) can be calculated directly,\n\n\\begin{eqnarray}\n\\Phi^{\\prime\\prime}+\\frac{1}{2}\\bar{H}\\bar{f}\\left(\\bar{H}\\right) & = & 0,\\nonumber \\\\\n\\frac{d}{dx}\\left(e^{-\\Phi}\\bar{f}\\left(\\bar{H}\\right)\\right) & = & 0,\\nonumber \\\\\n(\\Phi^{\\prime})^2+\\bar{g}\\left(\\bar{H}\\right) & = & 0,\\label{eq:EOM}\n\\end{eqnarray}\n\n\\noindent where\n\n\\begin{eqnarray}\n\\bar{H}\\left(x\\right) & = & \\frac{a^{\\prime} \\left(x\\right)}{a\\left(x\\right)},\\nonumber \\\\\n\\bar{f}\\left(\\bar{H}\\right) & = & d{\\sum_{k=1}^\\infty}\\left(-\\alpha^{\\prime}\\right)^{k-1} 2^{2\\left(k+1\\right)}k\\bar{c}_{k}\\bar{H}^{2k-1},\\nonumber \\\\\n\\bar{g}\\left(\\bar{H}\\right) & = & d {\\sum_{k=1}^\\infty} \\left(-\\alpha^{\\prime}\\right)^{k-1} 2^{2k+1}\\left(2k-1\\right)\\bar{c}_{k} \\bar{H}^{2k}.\\label{eq:EOM fh gh}\n\\end{eqnarray}\n\n\\noindent It is easy to see that $\\bar g'(\\bar H)=\\bar H \\bar f'(\\bar H)$. Note that $\\bar{H}\\left(x\\right)$ is not the Hubble parameter since our background is space dependent.\n\nNow, let us check whether there is a solution $a^{2}\\left(x\\right)=e^{2\\bar{H}_{0}x}$\nfor the EOM (\\ref{eq:EOM}) such that\n\n\\begin{equation}\nds^{2}=-e^{2\\bar{H}_{0}x}dt^{2}+dx^{2}+e^{2\\bar{H}_{0}x}\\left(dy^{2}+dz^{2}+\\ldots\\right).\\label{eq:ads solution}\n\\end{equation}\n\n\\noindent The scalar curvature of this metric is\n\n\\begin{equation}\nR=-D\\left(D-1\\right)\\bar{H}_{0}^{2},\n\\label{eq:Ricci Scalar}\n\\end{equation}\n\n\\noindent which implies that the metric (\\ref{eq:ads solution}) is\nan AdS background for constant $\\bar{H}_{0}\\neq0$. 
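The quoted scalar curvature can be checked directly; the sketch below (ours, using a standard Christoffel\/Ricci computation, with $D=4$ and $\\bar{H}_{0}=0.3$ as illustrative values) reproduces $R=-D(D-1)\\bar{H}_{0}^{2}$.

```python
# Numerical check (ours) of the scalar curvature for D = 4: the metric
# ds^2 = -e^{2Hx} dt^2 + dx^2 + e^{2Hx}(dy^2 + dz^2) should have constant
# scalar curvature R = -D(D-1) H^2 = -12 H^2, i.e. an AdS background.
import math

D, H, X = 4, 0.3, 1   # X = index of the x coordinate in (t, x, y, z)

def metric(x):
    e = math.exp(2 * H * x)
    return [-e, 1.0, e, e]                      # diagonal entries g_aa

def christoffel(x, eps=1e-6):
    g = metric(x)
    gp = [(p - m) / (2 * eps)                   # d g_aa / dx
          for p, m in zip(metric(x + eps), metric(x - eps))]
    Gam = [[[0.0] * D for _ in range(D)] for _ in range(D)]
    for a in range(D):
        for b in range(D):
            for c in range(D):
                s = 0.0
                if b == X and a == c: s += gp[a]   # d_b g_ac
                if c == X and a == b: s += gp[a]   # d_c g_ab
                if a == X and b == c: s -= gp[b]   # -d_a g_bc
                Gam[a][b][c] = 0.5 * s / g[a]      # diagonal inverse metric
    return Gam

def ricci_scalar(x, eps=1e-4):
    g, G = metric(x), christoffel(x)
    Gp, Gm = christoffel(x + eps), christoffel(x - eps)
    dG = [[[(Gp[a][b][c] - Gm[a][b][c]) / (2 * eps) for c in range(D)]
           for b in range(D)] for a in range(D)]
    R = 0.0
    for b in range(D):                 # R = g^{bb} R_bb (diagonal metric)
        Rbb = dG[X][b][b]              # d_a Gamma^a_bb (only a = X survives)
        if b == X:                     # -d_b Gamma^a_ba
            Rbb -= sum(dG[a][X][a] for a in range(D))
        for a in range(D):
            for e in range(D):
                Rbb += G[a][a][e] * G[e][b][b] - G[a][b][e] * G[e][b][a]
        R += Rbb / g[b]
    return R

R = ricci_scalar(0.4)
print(R)   # ~ -12 * 0.3^2 = -1.08, independent of x
```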
To see this more clearly, we apply the transformation $x \\to - \\log[\\bar H_0 \\xi]\/\\bar H_0$ and recover the familiar Poincare coordinates\n\n\\begin{equation}\nds^2 = \\frac{1\/\\bar H_0^2}{\\xi^2}\\big(-dt^2 +d\\xi^2 + dy^2+dz^2+\\ldots\\big).\n\\end{equation}\nSo, we have $\\bar H_0=1\/R_{AdS}$, consistent with Eq. (\\ref{eq:Ricci Scalar}). If we do not include the $\\alpha^{\\prime}$ corrections,\n$\\bar{f}\\left(\\bar{H}\\right)\\sim \\bar H_0$ is a constant. From the second equation\nof (\\ref{eq:EOM}), it follows that $\\Phi$ is a constant.\nTherefore, to satisfy the first equation of (\\ref{eq:EOM}), we must have\n$\\bar{H}_{0}=0$, and thus the metric (\\ref{eq:ads solution}) becomes\nflat and there is no AdS solution. One thus concludes that there\nis no $D$-dimensional AdS vacuum without fluxes and $\\alpha^{\\prime}$\ncorrections.\n\nOur aim is to search for solutions with constant $\\bar{H}_{0}\\neq0$. Considering\nthe effects of $\\alpha^{\\prime}$ corrections, if there is a nonvanishing\n$\\bar{H}_{0}$ solution, $\\bar{f}\\left(\\bar{H}_{0}\\right)$ is\na constant and then $\\Phi$ is also a constant from the second equation\nof (\\ref{eq:EOM}). Finally, from the first and third equations of (\\ref{eq:EOM}), we obtain the condition for a nonvanishing $\\bar{H}_{0}$ solution:\n\n\\begin{equation}\n\\bar{f}\\left(\\bar{H}_{0}\\right)=\\bar{g}\\left(\\bar{H}_{0}\\right)=0.\\label{eq:fh condition}\n\\end{equation}\n\n\nLet us determine the general form of $\\bar{f}\\left(\\bar{H}\\right)$ with specific choices for $\\bar{c}_{k\\geq2}$\nthat satisfy the condition (\\ref{eq:fh condition}). 
Instead of $\\bar{f}\\left(\\bar{H}\\right)$,\nit is better to consider its integral,\n\n\\begin{equation}\n\\bar{F}\\left(\\bar{H}\\right)\\equiv\\int_{0}^{\\bar{H}}\\bar{f}\\left(\\bar{H}^{\\prime}\\right)d\\bar{H}^{\\prime}.\n\\end{equation}\n\n\\noindent The condition $\\bar{f}\\left(\\bar{H}_{0}\\right)=\\bar{g}\\left(\\bar{H}_{0}\\right)=0$\nis replaced by\n\n\\begin{equation}\n\\bar{F}\\left(\\bar{H}_{0}\\right)=\\bar{F}^{\\prime}\\left(\\bar{H}_{0}\\right)=0.\n\\end{equation}\n\n\\noindent It is then easy to understand that the general form\n\n\\begin{equation}\n\\bar{F}\\left(\\bar{H}\\right)=-d\\bar{H}^{2}\\left(1+ {\\sum_{p=1}^{\\infty}}\\bar{d}_{p}\\left(\\alpha^{\\prime}\\right)^{p} \\bar{H}^{2p}\\right){\\prod_{i=1}^k}\\left(1-\\left(\\frac{\\bar{H}}{\\bar{H}_{0}^{\\left(i\\right)}}\\right)^{2}\\right)^{2}, \\label{eq:Fform}\n\\end{equation}\n\n\\noindent admits $2k$ AdS vacuum solutions $\\pm\\bar{H}_{0}^{\\left(1\\right)},\\ldots,\\pm\\bar{H}_{0}^{\\left(k\\right)}$ for an arbitrary integer $k>0$. So, the question is: do the coefficients $\\bar c_k$ support the functional form (\\ref{eq:Fform})? Although it appears impossible to obtain a definite answer at present, the bottom line is that $\\alpha^{\\prime}$ corrections do support the possibility of nonperturbative AdS vacua. It is worth noting that ``nonperturbative''\nhere means that the solution is obtained from all $\\alpha^{\\prime}$ corrections at once, not by first solving the two-derivative\nequations and then correcting the solution order by order in $\\alpha'$.\nThere exists another ``stronger'' version of ``nonperturbative'', namely that\n$\\bar{F}\\left(\\bar{H}\\right)$ cannot be expressed as a series expansion in $\\alpha'$ \\cite{Krishnan:2019mkv}. The same scenario occurs for the dS vacua, as explained in Refs. \\cite{Hohm:2019ccp,Hohm:2019jgu}.\n\n\nHowever, the real story may be more intriguing. 
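A minimal numerical check (our own) confirms that the double-zero construction above does what is claimed: with $\\alpha'=1$, $d=3$, all $\\bar{d}_{p}=0$ and a single root $\\bar{H}_{0}^{(1)}=0.5$, both $\\bar{f}=\\bar{F}'$ and $\\bar{g}$ vanish at $\\pm\\bar{H}_{0}^{(1)}$.

```python
# Minimal numerical check (ours) of the double-zero construction: with
# alpha' = 1, d = 3, all dbar_p = 0 and a single root Hbar0 = 0.5, both
# fbar = Fbar' and gbar vanish at Hbar = +/- Hbar0, which is exactly the
# AdS-vacuum condition.  We use gbar = Hbar*fbar - Fbar, which follows
# from gbar' = Hbar*fbar' together with gbar(0) = Fbar(0) = 0.
d, H0 = 3, 0.5

def Fbar(h):
    return -d * h**2 * (1.0 - (h / H0) ** 2) ** 2

def fbar(h, eps=1e-6):                     # fbar = Fbar' (central difference)
    return (Fbar(h + eps) - Fbar(h - eps)) / (2 * eps)

def gbar(h):
    return h * fbar(h) - Fbar(h)

for h in (H0, -H0):
    assert abs(Fbar(h)) < 1e-12
    assert abs(fbar(h)) < 1e-6
    assert abs(gbar(h)) < 1e-6
print("fbar(H0) = gbar(H0) = 0: AdS vacuum condition satisfied")
```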
As an illustration, let us assume that all orders of $\\alpha^{\\prime}$ corrections have a very special form that gives\n\n\\begin{equation}\n\\bar{f}\\left(\\bar{H}\\right)=-\\frac{2d}{\\sqrt{\\alpha^{\\prime}}}\\sin\\left(\\sqrt{\\alpha^{\\prime}}\\bar{H}\\right)=-2d {\\sum_{k=1}^{\\infty}}\\left(-\\alpha^{\\prime}\\right)^{k-1}\\frac{1} {\\left(2k-1\\right)!}\\bar{H}^{2k-1}.\\label{eq:fh sin}\n\\end{equation}\n\n\\noindent This functional form is a valid candidate for Eq. (\\ref{eq:EOM fh gh}) for\nspecial choices of $\\bar c_{k\\geq2}$. It includes all orders of $\\alpha'$ corrections and is evidently nonperturbative.\nThe solutions satisfying both conditions $\\bar{f}\\left(\\bar{H}_{0}\\right)=\\bar{g}\\left(\\bar{H}_{0}\\right)=0$ are\n\\begin{equation}\n\\sqrt{\\alpha^{\\prime}}\\bar{H}_{0}=2\\pi,4\\pi,\\ldots\n\\end{equation}\n\n\\noindent (the odd multiples of $\\pi$ solve $\\bar{f}=0$ but not $\\bar{g}=0$), leading to a discrete infinity of AdS vacua. However, note the coefficient relations (\\ref{eq:coeff relation}) between dS from Eq. (\\ref{eq:Odd action with alpha}) and AdS from Eq. (\\ref{eq: Odd with alpha x}):\n$\\bar{c}_{2k+1}=c_{2k+1}$, $\\bar{c}_{2k}=-c_{2k}$. We can immediately see that if $\\bar{f}\\left(\\bar{H}\\right)\\sim\\sin\\left(\\sqrt{\\alpha^{\\prime}}\\bar{H}\\right)$ for AdS, then the corresponding function is\n$f\\left(H\\right)\\sim\\sinh\\left(\\sqrt{\\alpha^{\\prime}}H\\right)$ for dS, and vice versa. But the $\\sinh$ function has no nontrivial zero. So, for the trial function (\\ref{eq:fh sin}), AdS and dS vacua cannot coexist and only one of them survives.\n\n\n\nThis looks like merely a coincidence since\nin any case, one could use a general form of Eq. (\\ref{eq:Fform}) to\npermit AdS or dS vacua. 
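The dichotomy just described can be made concrete numerically (our check, with $\\alpha'=1$ and $d=3$): the sine trial function passes both vacuum conditions only at even multiples of $\\pi$, while its $\\sinh$ partner never vanishes away from the origin.

```python
# Numerical companion (ours) to the sine trial function, with alpha' = 1
# and d = 3.  fbar(H) = -2 d sin(H) vanishes at every multiple of pi, but
# the second condition gbar(H0) = 0 keeps only H0 = 2 pi, 4 pi, ...;
# the dual dS function f ~ sinh has no nontrivial zero at all.
import math

d = 3

def fbar(H): return -2 * d * math.sin(H)
def Fbar(H): return  2 * d * (math.cos(H) - 1.0)   # Fbar' = fbar, Fbar(0) = 0
def gbar(H): return  H * fbar(H) - Fbar(H)         # from gbar' = H fbar'

vacua = [n for n in range(1, 5)
         if abs(fbar(n * math.pi)) < 1e-9 and abs(gbar(n * math.pi)) < 1e-9]
print(vacua)   # -> [2, 4]: only even multiples of pi survive

# the sinh partner relevant for dS never vanishes away from the origin
assert all(abs(-2 * d * math.sinh(H)) > 1.0 for H in (0.5, 1.0, 5.0))
```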
But we have some reasons to conjecture that by plugging the dS (AdS) metric into the yet unknown\ninfinite $\\alpha^{\\prime}$ expansion, one could sum the series\ninto an expression including a factor that is very close to the trial function of Eq. (\\ref{eq:fh sin})\n\\footnote{We want to emphasize that the real functional form of $\\bar{f}\\left(\\bar{H}\\right)$ could be more complicated than Eq. (\\ref{eq:fh sin}). We simply use this toy model to discuss the coexistence of nonperturbative dS and AdS vacua.}. In Ref. \\cite{Wang:2017mpm}, we showed that,\nwhen expressed in Riemann normal coordinates, the AdS (dS) metric\ncan be expressed in a simple form, which is called the $J$-factor by some mathematicians.\nTo see this explicitly, by considering the nonlinear sigma model of\nstring theory\n\n\\begin{equation}\nS=-\\frac{1}{4\\pi\\alpha'}\\int_{\\Sigma}g_{ij}(X)\\partial_{\\alpha}X^{i}\\partial^{\\alpha}X^{j},\n\\end{equation}\n\n\\noindent we can expand $X^{i}$ at some point $\\bar{x}$, say, $X^{i}\\left(\\tau,\\sigma\\right)=\\bar{x}^{i}+\\sqrt{\\alpha^{\\prime}}\\mathbb{Y}^{i}\\left(\\tau,\\sigma\\right)$,\nwhere the $\\mathbb{Y}^{i}$'s are dimensionless fluctuations. Locally\naround any point, one can always pick Riemann normal coordinates\n\n\\begin{eqnarray}\ng_{ij}\\left(X\\right) & = & \\eta_{ij}+\\frac{\\ell_{s}^{2}}{3}R_{iklj}\\mathbb{Y}^{k}\\mathbb{Y}^{l}+\\frac{\\ell_{s}^{3}}{6}D_{k}R_{ilmj}\\mathbb{Y}^{k}\\mathbb{Y}^{l}\\mathbb{Y}^{m}\\nonumber \\\\\n & & +\\frac{\\ell_{s}^{4}}{20}\\left(D_{k}D_{l}R_{imnj}+\\frac{8}{9}R_{iklp}R_{\\;mnj}^{p}\\right)\\mathbb{Y}^{k}\\mathbb{Y}^{l}\\mathbb{Y}^{m}\\mathbb{Y}^{n}+\\ldots.\n\\end{eqnarray}\n\n\\noindent When the background is maximally symmetric, the expansion is greatly simplified and can\nbe summed into a closed form. 
For dS, we have\n\n\\begin{equation}\nS_{dS}=-\\frac{1}{4\\pi}\\int_{\\Sigma}\\partial\\mathbb{Y}^{i}\\partial\\mathbb{Y}^{j}\\left[\\frac{\\sin^{2}\\left(\\frac{\\sqrt{\\alpha'}}{R_{dS}}\\mathbb{W}\\right)}{\\left(\\frac{\\sqrt{\\alpha'}}{R_{dS}}\\mathbb{W}\\right)^{2}}\\right]^{a}\\,_{i}\\,\\eta_{aj}\\,,\\qquad\\left(\\mathbb{W}^{2}\\right)_{\\quad b}^{a}\\equiv\\delta_{b}^{a}\\mathbb{Y}^{2}-\\mathbb{Y}^{a}\\mathbb{Y}_{b}.\n\\end{equation}\n\n\\noindent If the background is AdS, we get\n\n\\begin{equation}\nS_{AdS}=-\\frac{1}{4\\pi}\\int_{\\Sigma}\\partial\\mathbb{Y}^{i}\\partial\\mathbb{Y}^{j}\\left[\\frac{\\sinh^{2}\\left(\\frac{\\sqrt{\\alpha'}}{R_{AdS}}\\mathbb{W}\\right)}{\\left(\\frac{\\sqrt{\\alpha'}}{R_{AdS}}\\mathbb{W}\\right)^{2}}\\right]^{a}\\,_{i}\\,\\eta_{aj}\\,,\\qquad\\left(\\mathbb{W}^{2}\\right)_{\\quad b}^{a}\\equiv\\delta_{b}^{a}\\mathbb{Y}^{2}-\\mathbb{Y}^{a}\\mathbb{Y}_{b}.\n\\end{equation}\n\n\\noindent Noting that $H_{0}\\sim1\/R_{dS}$ and $\\bar{H}_{0}\\sim1\/R_{AdS}$,\nthe results strongly suggest that the beta functions or EOMs of\nthese two actions $S_{dS}$ and $S_{AdS}$ may behave very similarly to $f\\left(H\\right)\\sim\\sin\\left(\\sqrt{\\alpha^{\\prime}}H\\right)$\nand $\\bar{f}\\left(\\bar{H}\\right)\\sim\\sinh\\left(\\sqrt{\\alpha^{\\prime}}\\bar{H}\\right)$,\nor, equivalently, there are nonperturbative dS vacua but not\nnonperturbative AdS vacua, or vice versa. So it looks like we still need more information about the $\\alpha'$ corrections to give a definite answer.\n\nFinally, we wish to remark that we have only considered the string\nmetric. The relation between the Einstein metric $g_{\\mu\\nu}^{E}$ and\nstring metric $g_{\\mu\\nu}$ is $g_{\\mu\\nu}^{E}=e^{-\\frac{4\\phi}{D-2}}g_{\\mu\\nu}$.\nWhen we substitute our solution with a constant $\\phi$ and $\\bar{H}_{0}\\neq 0$\nback into the Einstein frame, $\\bar{H}_{0}^{E}$ goes to zero and the metric\nbecomes flat. 
This implies that there is no dS or AdS vacuum when $\\phi$\nis a constant in the Einstein frame without $\\alpha'$ corrections.\n\n\\vspace{5mm}\n\n\\noindent {\\bf Acknowledgements}\nWe are deeply indebted to Olaf Hohm and Barton Zwiebach for illuminating discussions and advice. We are also grateful to Hiroaki Nakajima, Bo Ning, Shuxuan Ying for very helpful discussions and suggestions. This work is supported in part by the National Natural Science Foundation of China (Grants No. 11875196, 11375121 and 11005016).\n\n\n\n\\section*{Appendix}\n\nThis appendix has two purposes. The first is to explicitly show that, at the leading order in $\\alpha'$, for our ansatz (\\ref{eq:BH metric}) the $O(d,d)$ symmetry of the spacetime action can be expressed in the standard form in terms of $\\alpha'$-corrected fields. The derivations follow the same pattern as the calculations in Refs. \\cite{Veneziano:1991ek,Meissner:1996sa}, except for some minus signs in particular places that account for the difference between time and space coordinates.\n\nThe second purpose is to briefly demonstrate that, based on our ansatz, the closed string spacetime action reduces to Eqs. (\\ref{eq: Odd with alpha x}-\\ref{eq:coeff relation}). The derivations are completely parallel to those in Ref. \\cite{Hohm:2019jgu}. 
Extra minus signs show up in the coefficients $c_k$ of the $\\alpha'$ expansion.\n\n\\subsection*{Zeroth order of $\\alpha^{\\prime}$}\n\n\\noindent We start with the tree-level closed string spacetime action without\n$\\alpha^{\\prime}$ corrections\n\n\\begin{equation}\nI_{0}\\equiv\\int d^{D}x\\sqrt{-g}e^{-2\\phi}\\left[R+4\\left(\\partial_{\\mu}\\phi\\right)^{2}-\\frac{1}{12}H_{ijk}H^{ijk}\\right],\\label{eq:Polyakov}\n\\end{equation}\n\n\\noindent where $g_{\\mu\\nu}$ is the string metric, $\\phi$ is the\ndilaton and $H_{ijk}=3\\partial_{\\left[i\\right.}b_{\\left.jk\\right]}$\nis the field strength of the antisymmetric Kalb-Ramond field $b_{ij}$.\nThe ansatz we use is\n\n\\begin{equation}\nds^{2}=-a^{2}\\left(x\\right)dt^{2}+dx^{2}+a^{2}\\left(x\\right)\\left(dy^{2}+dz^{2}+\\ldots\\right),\\quad b_{x\\mu}=0,\\label{eq:our ansatz}\n\\end{equation}\n\n\\noindent or\n\n\\begin{equation}\ng_{\\mu\\nu}=\\left(\\begin{array}{ccc}\n-a^{2}\\left(x\\right) & 0 & 0\\\\\n0 & 1 & 0\\\\\n0 & 0 & a^{2}\\left(x\\right)\\delta_{ab}\n\\end{array}\\right),\\qquad b_{\\mu\\nu}=\\left(\\begin{array}{ccc}\n0 & 0 & b_{0b}\\left(x\\right)\\\\\n0 & 0 & 0\\\\\nb_{a0}\\left(x\\right) & 0 & b_{ab}\\left(x\\right)\n\\end{array}\\right),\\qquad\\phi=\\phi\\left(x\\right),\n\\label{eq:our ansatz matrix}\n\\end{equation}\n\n\\noindent where $a,b=2,3,\\ldots$. 
Mimicking the metric of cosmological backgrounds, we choose $b_{x\\mu}=0$ in our ansatz.\nIt turns out that this gauge is crucial to preserving the $O(d,d)$ symmetry. In order to obtain the reduced\naction by using the ansatz (\\ref{eq:our ansatz matrix}), we rotate between the time-like $t$ and the first space-like\n$x$ directions and rewrite the metric and $b_{\\mu\\nu}$ as\n\n\\begin{equation}\ng_{\\mu\\nu}=\\left(\\begin{array}{cc}\n1 & 0\\\\\n0 & G_{ij}\\left(x\\right)\n\\end{array}\\right),\\qquad b_{\\mu\\nu}=\\left(\\begin{array}{cc}\n0 & 0\\\\\n0 & B_{ij}\\left(x\\right)\n\\end{array}\\right),\\label{eq:set up 1}\n\\end{equation}\n\n\\noindent where\n\n\\begin{equation}\nG_{ij}\\left(x\\right)\\equiv\\left(\\begin{array}{cc}\n-a^{2}\\left(x\\right) & 0\\\\\n0 & a^{2}\\left(x\\right)\\delta_{ab}\n\\end{array}\\right),\\qquad B_{ij}\\left(x\\right)\\equiv\\left(\\begin{array}{cc}\n0 & b_{0b}\\left(x\\right)\\\\\nb_{a0}\\left(x\\right) & b_{ab}\\left(x\\right)\n\\end{array}\\right).\\label{eq:set up 2}\n\\end{equation}\n\n\\noindent So $g_{00}\\equiv g_{xx}$, $g_{11}\\equiv g_{tt}$ and $b_{00}\\equiv b_{xx}$, $b_{11}\\equiv b_{tt}$. Henceforth, we will use Eqs. (\\ref{eq:set up 1}) and (\\ref{eq:set up 2}) as the definitions for $g_{\\mu\\nu}$ and $b_{\\mu\\nu}$.\n\nSince we only need to use $G_{ij}$ as a whole to discuss the $O(d,d)$ symmetry and not its components, the time-like minus sign $G_{11}\\left(x\\right)=-a^{2}\\left(x\\right)$ does not show up until we calculate the reduced action. 
Straightforwardly, the Ricci tensor is\n\n\\begin{eqnarray}\nR_{x}^{\\;\\;x} & = & -\\frac{1}{4}\\mathrm{Tr}\\left(G^{-1}G^{\\prime}\\right)^{2}-\\frac{1}{2}\\mathrm{Tr}\\left(G^{-1}G^{\\prime\\prime}\\right)-\\frac{1}{2}\\mathrm{Tr}\\left(G^{\\prime}{}^{-1}G^{\\prime}\\right),\\nonumber \\\\\nR_{t}^{\\;\\;t} & = & -\\frac{1}{2}\\left(G^{-1}G^{\\prime\\prime}\\right)_{t}^{\\;\\;t}-\\frac{1}{4}\\left(G^{-1}G^{\\prime}\\right)_{t}^{\\;\\;t}\\mathrm{Tr}\\left(G^{-1}G^{\\prime}\\right)+\\frac{1}{2}\\left(G^{-1}G^{\\prime}G^{-1}G^{\\prime}\\right)_{t}^{\\;\\;t},\\nonumber \\\\\nR_{a}^{\\;\\;b} & = & -\\frac{1}{2}\\left(G^{-1}G^{\\prime\\prime}\\right)_{a}^{\\;\\;b}-\\frac{1}{4}\\left(G^{-1}G^{\\prime}\\right)_{a}^{\\;\\;b}\\mathrm{Tr}\\left(G^{-1}G^{\\prime}\\right)+\\frac{1}{2}\\left(G^{-1}G^{\\prime}G^{-1}G^{\\prime}\\right)_{a}^{\\;\\;b},\\label{eq:set up 3}\n\\end{eqnarray}\n\n\\noindent and\n\n\\begin{equation}\nH_{\\mu\\nu\\alpha}H^{\\mu\\nu\\alpha}=3H_{0ij}H^{0ij}=3B_{ij}^{\\prime}\\left(G^{-1}B^{\\prime}G^{-1}\\right)^{ij}=-3\\mathrm{Tr}\\left(G^{-1}B^{\\prime}\\right)^{2}.\\label{eq:set up 4}\n\\end{equation}\n\n\\noindent where we used the notation\n\n\\begin{equation}\nG'^{-1} \\equiv \\frac{d}{dx}(G^{-1}),\\quad\\quad \\mathrm{Tr}\\left(G^{-1}G^{\\prime}\\right)=g^{\\mu\\nu}g_{\\mu\\nu}^{\\prime}.\n\\end{equation}\n\n\\noindent We then introduce the $O\\left(d,d\\right)$-invariant dilaton $\\Phi$, defined as\n\n\\begin{eqnarray}\n\\Phi & \\equiv & 2\\phi-\\frac{1}{2}\\ln\\left|\\det g_{\\mu\\nu}\\right|,\\nonumber \\\\\n\\Phi^{\\prime} & = & 2\\phi^{\\prime}-\\frac{1}{2}\\mathrm{Tr}\\left(G^{-1}G^{\\prime}\\right).\\label{eq:sign 1}\n\\end{eqnarray}\n\n\\noindent Therefore, the action (\\ref{eq:Polyakov}) can be rewritten\nas\n\n\\begin{eqnarray}\n\\bar I_{0} & = & \\int dx\\, 
e^{-\\Phi}\\left[{\\Phi'}^2+\\frac{1}{4}\\mathrm{Tr}\\left(G^{-1}G^{\\prime}\\right)^{2}-\\frac{1}{2}\\mathrm{Tr}\\left(G^{\\prime}{}^{-1}G^{\\prime}\\right)+\\frac{1}{4}\\mathrm{Tr}\\left(G^{-1}B^{\\prime}\\right)^{2}\\right.\\nonumber \\\\\n & & \\left.-\\mathrm{Tr}\\left(G^{-1}G^{\\prime\\prime}\\right)+{\\Phi'}\\mathrm{Tr}\\left(G^{-1}G^{\\prime}\\right)\\right],\n\\end{eqnarray}\n\n\\noindent where the bar on $\\bar I_{0}$ indicates that we are working with the ansatz (\\ref{eq:our ansatz}). Using integration by parts,\n\\begin{equation}\n\\frac{d}{dx}\\left[e^{-\\Phi}\\mathrm{Tr}\\left(G^{-1}G^{\\prime}\\right)\\right]=e^{-\\Phi}\\left[\\mathrm{Tr}\\left(G^{-1}G^{\\prime\\prime}\\right)+\\mathrm{Tr}\\left(G^{\\prime-1}G^{\\prime}\\right)-\\Phi^{\\prime}\\mathrm{Tr}\\left(G^{-1}G^{\\prime}\\right)\\right],\n\\end{equation}\nthe action becomes\n\n\\begin{equation}\n\\bar{I}_{0}=\\int dxe^{-\\Phi}\\left[\\Phi^{\\prime2}-\\frac{1}{4}\\mathrm{Tr}\\left(G^{-1}G^{\\prime}\\right)^{2}+\\frac{1}{4}\\mathrm{Tr}\\left(G^{-1}B^{\\prime}\\right)^{2}\\right].\n\\end{equation}\n\n\\noindent Moreover, we want to point out the sign differences\nbetween our ansatz (\\ref{eq:our ansatz}) and the time-dependent FLRW\nmetric:\n\n\\begin{equation}\n\\mathrm{sign}\\left[R_{xx}\\right]=\\mathrm{sign}\\left[\\tilde{R}_{tt}\\right],\\qquad\\mathrm{sign}\\left[R_{tt}\\right]=-\\mathrm{sign}\\left[\\tilde{R}_{xx}\\right],\\qquad\\mathrm{sign}\\left[R_{ab}\\right]=-\\mathrm{sign}\\left[\\tilde{R}_{ab}\\right].\\label{eq:sign 2}\n\\end{equation}\n\n\\begin{equation}\n\\mathrm{sign}\\left[H^{2}\\right]=-\\mathrm{sign}\\left[\\tilde{H}^{2}\\right],\\qquad\\mathrm{sign}\\left[H_{\\mu\\nu}^{2}\\right]=-\\mathrm{sign}\\left[\\tilde{H}_{\\mu\\nu}^{2}\\right],\\qquad\\mathrm{sign}\\left[\\left(\\partial\\phi\\right)^{2}\\right]=-\\mathrm{sign}\\left[\\left(\\partial\\tilde{\\phi}\\right)^{2}\\right],\\label{eq:sign 3}\n\\end{equation}\n\n\\noindent where $\\tilde{A}$ represents the 
quantities calculated\nin the time-dependent FLRW background. For the Ricci scalar,\nwe have\n\n\\begin{equation}\n\\mathrm{sign}\\left[R\\right]=-\\mathrm{sign}\\left[\\tilde{R}\\right].\\label{eq:sign 4}\n\\end{equation}\n\n\\noindent Finally, the tree-level action (\\ref{eq:Polyakov}) becomes\n\\begin{equation}\n\\bar{I}_{0}=\\int dxe^{-\\Phi}\\left[\\Phi^{\\prime2}+\\frac{1}{8}\\mathrm{Tr}\\left(M^{\\prime}\\eta\\right)^{2}\\right].\\label{eq:0th action}\n\\end{equation}\n\n\\noindent where\n\n\\begin{equation}\nM=\\left(\\begin{array}{cc}\nG^{-1} & -G^{-1}B\\\\\nBG^{-1} & G-BG^{-1}B\n\\end{array}\\right),\\qquad\\eta=\\left(\\begin{array}{cc}\n0 & I\\\\\nI & 0\n\\end{array}\\right).\n\\end{equation}\n\n\\noindent Since $\\mathrm{Tr}\\left(M^{\\prime}\\eta\\right)^{2}=\\mathrm{Tr}\\left(M^{\\prime}M^{\\prime}{}^{-1}\\right)$, it is easy to see that the tree-level action (\\ref{eq:0th action}) is invariant under $O\\left(d,d,R\\right)$\ntransformations,\n\n\\begin{equation}\n\\Phi\\rightarrow\\Phi,\\quad M\\rightarrow\\tilde{M}=\\Omega^{T}M\\Omega,\n\\end{equation}\n\n\\noindent where $\\Omega$ satisfies $\\Omega^{T}\\eta\\Omega=\\eta$. Considering the gravitational background with $B_{ij}=0$ and a global\ntransformation by $\\eta\\in O\\left(d,d,R\\right)$, we have\n\n\\begin{equation}\nM=\\left(\\begin{array}{cc}\nG^{-1} & 0\\\\\n0 & G\n\\end{array}\\right)\\rightarrow\\tilde{M}=\\eta M\\eta=\\left(\\begin{array}{cc}\nG & 0\\\\\n0 & G^{-1}\n\\end{array}\\right).\n\\end{equation}\n\n\\noindent This is the space-dependent duality corresponding to the scale-factor duality in the time-dependent FLRW background.\n\n\\subsection*{First-order correction of $\\alpha^{\\prime}$}\n\nWe now demonstrate that the closed string spacetime action with the first-order $\\alpha^{\\prime}$ correction also possesses the standard $O(d,d)$ symmetry for our ansatz (\\ref{eq:our ansatz}). 
The action with the first-order $\\alpha^{\\prime}$ correction and vanishing Kalb-Ramond field is\n\n\\begin{equation}\nI=\\int d^{D}x\\sqrt{-g}e^{-2\\phi}\\left[R+4\\left(\\partial_{\\mu}\\phi\\right)^{2}-\\alpha^{\\prime}\\lambda_{0} R_{\\mu\\nu\\sigma\\rho}R^{\\mu\\nu\\sigma\\rho}+\\mathcal{O}\\left(\\alpha^{\\prime2}\\right)\\right].\n\\end{equation}\n\n\\noindent With some $\\alpha'$-corrected field redefinitions as in \\cite{Meissner:1996sa}, it can be expressed in terms of first-order derivatives as\n\n\\begin{eqnarray}\nI & = & \\int d^{D}x\\sqrt{-g}e^{-2\\phi}\\left[R+4\\left(\\partial_{\\mu}\\phi\\right)^{2}\\right]\\nonumber \\\\\n & & -\\alpha^{\\prime}\\lambda_{0}\\int d^{D}x\\sqrt{-g}e^{-2\\phi}\\left[-R_{GB}^{2}+16\\left(R^{\\mu\\nu}-\\frac{1}{2}g^{\\mu\\nu}R\\right)\\partial_{\\mu}\\phi\\partial_{\\nu}\\phi-16\\partial^{2}\\phi \\left(\\partial\\phi\\right)^{2}+16\\left(\\partial\\phi\\right)^{4}\\right] +\\mathcal{O} \\left(\\alpha^{\\prime2}\\right),\n\\end{eqnarray}\n\n\\noindent where $R_{GB}^{2}$ is the Gauss-Bonnet term:\n\n\\begin{equation}\nR_{GB}^{2}=R_{\\mu\\nu\\sigma\\rho}R^{\\mu\\nu\\sigma\\rho}-4R_{\\mu\\nu}R^{\\mu\\nu}+R^{2}.\n\\end{equation}\n\n\\noindent By using our ansatz (\\ref{eq:our ansatz}), as one can check directly, the action turns out to be\n\n\\begin{equation}\n\\bar{I}=-\\int dxe^{-\\Phi}\\left\\{ -\\Phi^{\\prime2}-\\frac{1}{8}\\mathrm{Tr}\\mathcal{M}^{\\prime2}+\\alpha^{\\prime}\\lambda_{0}\\left[\\frac{1}{16}\\mathrm{Tr}\\mathcal{M}^{\\prime4}-\\frac{1}{64}\\left(\\mathrm{Tr}\\mathcal{M}^{\\prime2}\\right)^{2}-\\frac{1}{4}\\Phi^{\\prime2}\\mathrm{Tr}\\mathcal{M}^{\\prime2}-\\frac{1}{3}\\Phi^{\\prime4}\\right]\\right\\}+\\mathcal{O} \\left(\\alpha^{\\prime2}\\right),\n\\label{eq:1st order O(d,d)}\n\\end{equation}\n\n\\noindent with\n\n\\begin{equation}\n\\mathcal{M}\\equiv M\\eta=\\left(\\begin{array}{cc}\n0 & G^{-1}\\\\\nG & 0\n\\end{array}\\right).\n\\end{equation}\n\n\\noindent Now, let us introduce the Kalb-Ramond 
field to the action\nwith the first-order $\\alpha^{\\prime}$ correction, which is\ngiven by\n\n\\begin{eqnarray}\nI & = & \\int d^{D}x\\sqrt{-g}e^{-2\\phi}\\left[R+4\\left(\\partial_{\\mu}\\phi\\right)^{2}-\\frac{1}{12}H^{2}\\right]\\nonumber \\\\\n & & -\\alpha^{\\prime}\\lambda_{0}\\int d^{D}x\\sqrt{-g} e^{-2\\phi}\\left[-R_{GB}^{2}+16\\left(R^{\\mu\\nu}-\\frac{1}{2}g^{\\mu\\nu}R\\right)\\partial_{\\mu}\\phi\\partial_{\\nu} \\phi-16\\partial^{2}\\phi\\left(\\partial\\phi\\right)^{2}+16\\left(\\partial\\phi\\right)^{4}\\right.,\\nonumber \\\\\n & & +\\frac{1}{2}\\left(R^{\\mu\\nu\\sigma\\rho}H_{\\mu\\nu\\alpha} H_{\\sigma\\rho}^{\\quad\\alpha}-2R^{\\mu\\nu}H_{\\mu\\nu}^{2}+\\frac{1}{3} RH^{2}\\right)-2\\left(D^{\\mu}\\partial^{\\nu}\\phi H_{\\mu\\nu}^{2}-\\frac{1}{3}\\partial^{2}\\phi H^{2}\\right)\\nonumber \\\\\n & & \\left.-\\frac{2}{3}H^{2}\\left(\\partial\\phi\\right)^{2} - \\frac{1}{24}H_{\\mu\\nu\\lambda}H_{\\quad\\rho\\alpha}^{\\nu} H^{\\rho\\sigma\\lambda}H_{\\sigma}^{\\quad\\mu\\alpha}+\\frac{1}{8}H_{\\mu\\nu}^{2}H^{2\\mu\\nu}-\\frac{1}{144}\\left(H^{2}\\right)^{2}\\right]+\\mathcal{O} \\left(\\alpha^{\\prime2}\\right).\n\\end{eqnarray}\n\n\\noindent Using Eqs. (\\ref{eq:set up 1}), (\\ref{eq:set up 2}),\n(\\ref{eq:set up 3}), (\\ref{eq:set up 4}) and (\\ref{eq:sign 1}), the action above can be expressed in the $O\\left(d,d\\right)$-invariant form (\\ref{eq:1st order O(d,d)}), but with\n\n\\begin{equation}\n\\mathcal{M}=\\left(\\begin{array}{cc}\nBG^{-1} & G-BG^{-1}B\\\\\nG^{-1} & -G^{-1}B\n\\end{array}\\right),\n\\end{equation}\nas one can verify by applying the EOM of $\\mathcal{M}$ and $\\phi$.\n\n\\noindent The action (\\ref{eq:1st order O(d,d)}) can be further simplified by using the tree-level EOM of $\\Phi$ from Eq. 
(\\ref{eq:0th action}), which is\n\n\\begin{equation}\n\\Phi^{\\prime2}+\\frac{1}{8}\\mathrm{Tr}\\left(M^{\\prime}\\eta\\right)^{2}=0\\rightarrow\\Phi^{\\prime2}=-\\frac{1}{8}\\mathrm{Tr}\\mathcal{M}^{\\prime2}.\\label{eq:redefine}\n\\end{equation}\n\n\\noindent Then the action is reduced to\n\n\\begin{equation}\n\\bar{I}=-\\int dxe^{-\\Phi}\\left\\{ -\\Phi^{\\prime2}-\\frac{1}{8}\\mathrm{Tr}\\mathcal{M}^{\\prime2}+\\alpha^{\\prime}\\lambda_{0}\\left[\\frac{1}{16}\\mathrm{Tr}\\mathcal{M}^{\\prime4}+\\frac{1}{96}\\left(\\mathrm{Tr}\\mathcal{M}^{\\prime2}\\right)^{2}\\right]\\right\\}.\n\\end{equation}\n\n\\noindent This action is manifestly invariant under the $O\\left(d,d,R\\right)$\ntransformation\n\\begin{equation}\n\\Phi\\rightarrow\\Phi,\\quad M\\rightarrow\\tilde{M}=\\Omega^{T}M\\Omega.\n\\end{equation}\n\n\\subsection*{Higher-order corrections of $\\alpha^{\\prime}$}\nFor our ansatz, we have shown that the zeroth-order and first-order actions in $\\alpha^{\\prime}$ can be rewritten in a standard $O\\left(d,d\\right)$-invariant form. Following the same logic as Refs. \\cite{Hohm:2019ccp,Hohm:2019jgu}, it is reasonable to assume that this is also true for all orders in $\\alpha'$.\n\nFollowing the derivations in Ref. \\cite{Hohm:2019jgu}, we now show that for our ansatz the action can be put into the reduced form (\\ref{eq: Odd with alpha x}). 
First, from the definition of $\\mathcal{M}$, it is easy to get\n\n\\begin{equation}\n\\mathrm{Tr}\\mathcal{M}=\\mathrm{Tr}\\mathcal{M}^{\\prime}=\\mathrm{Tr}\\mathcal{M}^{\\prime\\prime}=0.\n\\end{equation}\n\n\\noindent Moreover, $\\mathcal{M}\\mathcal{M}=1$ leads to $\\mathcal{M}\\mathcal{M}^{\\prime}+\\mathcal{M}^{\\prime}\\mathcal{M}=0$ and then\n\n\\begin{equation}\n2\\mathcal{M}^{\\prime}\\mathcal{M}^{\\prime}+\\mathcal{M}^{\\prime\\prime}\\mathcal{M}+ \\mathcal{M}\\mathcal{M}^{\\prime\\prime}=0\\quad{\\rm and} \\quad \\mathcal{M}\\mathcal{M}^{\\prime2k+1}=-\\mathcal{M}^{\\prime2k+1}\\mathcal{M}.\n\\end{equation}\n\n\\noindent Multiplying the last relation by $\\mathcal{M}$ and taking traces, one finds\n\n\\begin{equation}\n\\mathrm{Tr}\\left(\\mathcal{M}^{\\prime2k+1}\\right)=0,\\qquad k=0,1,\\ldots.\n\\end{equation}\n\n\\noindent Second, by using the equations of motion of the action\n(\\ref{eq:0th action}), higher space-dependent derivatives of $\\mathcal{M}$\ncan be written in terms of $\\mathcal{M}$ and $\\mathcal{M}^{\\prime}$.\nTherefore, higher-order corrections of $\\alpha^{\\prime}$ can be\nbuilt from $\\mathcal{M}$ and $\\mathcal{M}^{\\prime}$. Third, if the\nterms of the higher-order $\\alpha^{\\prime}$ corrections take the form\n$\\mathrm{Tr}\\left(\\mathcal{M}^{m}\\mathcal{M}^{\\prime k}\\right)$,\nby using $\\mathcal{M}\\mathcal{M}=1$ we get\n\n\\begin{equation}\n\\mathrm{Tr}\\left(\\mathcal{M}\\mathcal{M}^{\\prime k}\\right)=-\\mathrm{Tr}\\left(\\mathcal{M}\\mathcal{M}^{\\prime k}\\right)=0.\n\\end{equation}\n\n\\noindent Finally, due to Eq. (\\ref{eq:redefine}),\nthe dilaton term $\\Phi^{\\prime2}$ can be replaced by $\\mathrm{Tr}\\mathcal{M}^{\\prime2}$.\nIn summary, the higher-order corrections of $\\alpha^{\\prime}$ are\nconstructed using $\\mathcal{M}^{\\prime2}$. 
For example,\n\n\\begin{eqnarray}\n\\mathcal{O}\\left(\\alpha^{\\prime}\\right): & & a_{1}\\mathrm{Tr}\\mathcal{M}^{\\prime4}+a_{2}\\left(\\mathrm{Tr}\\mathcal{M}^{\\prime2}\\right)^{2},\\nonumber \\\\\n\\mathcal{O}\\left(\\alpha^{\\prime2}\\right): & & b_{1}\\mathrm{Tr}\\mathcal{M}^{\\prime6}+b_{2}\\mathrm{Tr}\\mathcal{M}^{\\prime4}\\mathrm{Tr}\\mathcal{M}^{\\prime2}+b_{3}\\left(\\mathrm{Tr}\\mathcal{M}^{\\prime2}\\right)^{3},\\nonumber \\\\\n & \\vdots\\nonumber \\\\\n\\mathcal{O}\\left(\\alpha^{\\prime k-1}\\right): & & d_{1}\\mathrm{Tr}\\mathcal{M}^{\\prime2k}+d_{2}\\mathrm{Tr}\\mathcal{M}^{\\prime2k-2}\\mathrm{Tr}\\mathcal{M}^{\\prime2}+d_{3}\\mathrm{Tr}\\mathcal{M}^{\\prime2k-4}\\left(\\mathrm{Tr}\\mathcal{M}^{\\prime2}\\right)^{2}+\\ldots.\n\\end{eqnarray}\n\n\\noindent Furthermore, considering the zeroth-order action, Eq. (\\ref{eq:0th action}),\nthe variation with respect to $g_{xx}$ gives\n\n\\begin{equation}\n\\delta\\bar{I}_{0}=\\int dxe^{-\\Phi}\\left[\\Phi^{\\prime2}+\\frac{1}{8}\\mathrm{Tr}\\left(\\mathcal{M}^{\\prime}\\right)^{2}\\right]\\delta g_{xx}.\\label{eq:delta 1}\n\\end{equation}\n\n\\noindent This variation can be generalized to the higher orders with\n$X_{2k}\\left(\\mathcal{M}^{\\prime}\\right)=\\mathrm{Tr}\\left[\\left(\\mathcal{M}^{\\prime}\\right)^{2k_{1}}\\right]\\cdots\\mathrm{Tr}\\left[\\left(\\mathcal{M}^{\\prime}\\right)^{2k_{n}}\\right]$,\n$k=k_{1}+\\cdots+k_{n}$:\n\n\\begin{equation}\n\\delta\\bar{I}_{k}=\\frac{\\beta k}{2\\left(4k-1\\right)}\\int dxe^{-\\Phi}X_{2k}\\left(\\mathcal{M}^{\\prime}\\right)\\mathrm{Tr}\\left(\\mathcal{M}^{\\prime}\\right)^{2},\\label{eq:delta 2}\n\\end{equation}\n\n\\noindent where\n\n\\begin{equation}\n\\delta g_{xx}=\\beta\\alpha^{\\prime k}X_{2k}\\left(\\mathcal{M}^{\\prime}\\right).\\label{eq: rede gxx}\n\\end{equation}\n\n\\noindent If we substitute the redefinition (\\ref{eq: rede gxx})\nback into Eq. 
(\\ref{eq:delta 1}) and set $\\frac{\\beta k}{2\\left(4k-1\\right)}=-1$,\nwe find that the terms with $\\mathrm{Tr}\\mathcal{M}^{\\prime2}$ can\nbe eliminated when we sum Eqs. (\\ref{eq:delta 1}) and (\\ref{eq:delta 2}).\nIn other words, we can safely set $\\mathrm{Tr}\\mathcal{M}^{\\prime2}=0$ in the $\\alpha'$-corrected terms of the action\nand obtain\n\n\\begin{eqnarray}\n\\mathcal{O}\\left(\\alpha^{\\prime}\\right): & & a_{1}\\mathrm{Tr}\\mathcal{M}^{\\prime4},\\nonumber \\\\\n\\mathcal{O}\\left(\\alpha^{\\prime2}\\right): & & b_{1}\\mathrm{Tr}\\mathcal{M}^{\\prime6},\\nonumber \\\\\n & \\vdots\\nonumber \\\\\n\\mathcal{O}\\left(\\alpha^{\\prime k-1}\\right): & & d_{1}\\mathrm{Tr}\\mathcal{M}^{\\prime2k}+d_{4}\\mathrm{Tr}\\mathcal{M}^{\\prime2k-4}\\mathrm{Tr}\\mathcal{M}^{\\prime4}+\\ldots.\n\\end{eqnarray}\n\n\\noindent The action with higher-order $\\alpha^{\\prime}$ corrections\nthen reduces to\n\n\\begin{equation}\n\\bar{I}\\equiv-\\int dxe^{-\\Phi}\\left(-\\Phi^{\\prime2}+{\\sum_{k=1}^{\\infty}}\\left(\\alpha^{\\prime}\\right)^{k-1}\\bar{c}_{k}\\mathrm{Tr}\\left(\\mathcal{M}^{\\prime2k}\\right)+\\mathrm{multitraces}\\right).\n\\end{equation}\n\n\\noindent After extracting the overall minus sign of the action above,\nthe even orders of the $\\alpha^{\\prime}$ corrections acquire a\nminus sign. Since $\\bar{c}_{k}$ is the coefficient of $\\mathrm{Tr}\\mathcal{M}^{\\prime2k}$,\nwhich is not modified by the replacement $\\Phi^{\\prime k}\\simeq\\frac{1}{8}\\left(\\frac{k-1}{k-3}\\right)\\Phi^{\\prime k-2}\\mathrm{Tr}\\mathcal{M}^{\\prime2}$,\nthe values of $\\left|c_{k}\\right|$ and $\\left|\\bar{c}_{k}\\right|$\nare the same. Moreover,\nby using Eqs. 
(\\ref{eq:sign 1}), (\\ref{eq:sign 2}), and (\\ref{eq:sign 3})\nto all orders, we find $\\bar{c}_{1}=c_{1}=-\\frac{1}{8}$, $\\bar{c}_{2k+1}=c_{2k+1}$, and\n$\\bar{c}_{2k}=-c_{2k}$.\nIt is worth noting that the relationships between $\\bar{c}_{k}$ and $c_{k}$ are not changed after including the contributions of the multitrace terms.\n\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\\label{sec1}\n\nMany-server queueing models are prevalent in modeling call\ncenters and other large-scale service systems. They are used for\noptimizing staffing and making dynamic control decisions. The\ncomplexity of the underlying queueing model renders such optimization\nproblems intractable for exact analysis, and one needs to resort to\napproximations. A prominent mode of approximate analysis is to study\nsuch systems in the so-called Halfin--Whitt (HW) heavy-traffic regime;\ncf. \\cite{HaW81}. Roughly speaking, the analysis of a queueing system\nin the HW regime proceeds by scaling up the number of servers and the\narrival rate of customers in such a way that the system load approaches\none asymptotically. To be more specific, instead of considering a\nsingle system, one considers a sequence of (closely related) queueing\nsystems indexed by a parameter $n$ along which the arrival rates and\nthe number of servers scale up so that the system traffic intensity~$\\rho^n$\nsatisfies\n\\begin{equation}\\label{eqHTcond}\n\\sqrt{n} (1-\\rho^n)\\rightarrow\\beta\\qquad\\mbox{as } n\\tinf.\n\\end{equation}\n\nIn the context of dynamic control, passing to a formal limit of the\n(properly scaled) system dynamics equations as $n\\tinf$ gives rise to\na \\textit{limit} diffusion control problem, which is often more tractable\nthan the original dynamic control problem it approximates.\nThe approximating diffusion control problem typically provides useful\nstructural insights and guides the design of good policies for the\noriginal system. 
Once a candidate policy is proposed for the original\nproblem of interest, its asymptotic performance can be studied in the\nHW regime. The ultimate goal is to establish that the proposed policy\nperforms well. To this end, a useful criterion is the notion of\nasymptotic optimality, which provides assurance that the optimality gap\nassociated with the proposed policy vanishes asymptotically \\textit{under\ndiffusion scaling} as $n\\tinf$. Hence, asymptotic optimality in this\ncontext is equivalent to showing that the optimality gap is $o(\\sqrt{n})$.\n\nA central reference for our purposes is the recent paper by Atar,\nMandelbaum and Reiman \\cite{AMR02}, where the authors apply all steps\nof the above scheme to the important class of problems of dynamically\nscheduling a multiclass queue with many identical servers in the HW\nregime. Specifically, \\cite{AMR02} considers a sequence of systems\nindexed by the number of servers $n$, where the number of servers and\nthe arrival rates of the various customer classes increase with $n$\nsuch that the heavy-traffic condition holds; cf. equation (\\ref\n{eqHTcond}). Following the scheme described above, the authors derive\nan approximate diffusion control problem through a formal limiting\nargument. They then show that the diffusion control problem admits an\noptimal Markov policy, and that the corresponding HJB equation (a\nsemilinear elliptic PDE) has a unique classical solution. Using the\nMarkov control policy and the HJB equation, the authors propose\nscheduling control policies for the original (sequence of) queueing\nsystems of interest. Finally, they prove that the proposed sequence of\npolicies is asymptotically optimal under diffusion scaling. Namely, the\noptimality gap of the proposed policy for the $n$th system is\n$o(\\sqrt{n})$. A similar approach is applied to more general networks\nin \\cite{atar2005scheduling}. In this paper, we study a similar\nqueueing system (see Section \\ref{secmodel}). 
Our goal, however, is\nto provide an improved optimality gap which, in turn, requires a\nsubstantially different scheme than the one alluded to above.\n\nApproximations in the HW regime for performance analysis have been used\nextensively for the study of fixed policies. Given a particular policy,\nit may often be difficult to calculate various performance measures in\nthe original queueing system. Fortunately, the corresponding\napproximations in the HW regime are often more tractable. The machinery\nof strong approximations (cf. Cs\\\"{o}rgo and Horv\\'{a}th \\cite{csorgo})\noften plays a central role in such analysis. In the context\nof many-server heavy-traffic analysis, with strong approximations, the\narrival and service processes (under suitable assumptions on the\ninter-arrival and service times) can be approximated by a diffusion\nprocess so that the approximation error on finite intervals is $O(\\log\nn)$ (where $n$ is the number of servers as before). Therefore, it is\nnatural to expect that, under a given policy, the error in the\ndiffusion approximations of the various performance metrics is $O(\\log\nn)$, which is indeed verified for various settings in the literature\n(see, e.g., \\cite{MMR98}).\n\nA natural question is then whether one can go beyond the analysis of\nfixed policies and achieve an optimality gap that is logarithmic in $n$\nalso under dynamic control, improving upon the usual optimality gap of\n$o(\\sqrt{n})$. More specifically, can one propose a sequence of\npolicies (one for each system in the sequence) where the optimality gap\nfor the policy (associated with the $n$th system) is logarithmic\nin~$n$? While one hopes to get logarithmic optimality gaps as suggested by\nstrong approximations, it is not a priori clear if this can be achieved\nunder dynamic control. The purpose of this paper is to provide a\nresolution to this question. 
Namely, we study whether one can establish\nsuch a strong notion of asymptotic optimality and if so, then how\nshould one go about constructing policies which are asymptotically\noptimal in this stronger sense.\n\nOur results show that such strengthened bounds on optimality gaps can\nbe attained. Specifically, we construct a sequence of asymptotically\noptimal policies, where the optimality gap is logarithmic in $n$. Our\nanalysis reveals that identifying (a sequence of) candidate policies\nrequires a new approach. To be specific, we advance a sequence of\ndiffusion control problems (as opposed to just one) where the diffusion\ncoefficient in each system depends on the state and the control. This\nis contrary to the existing work on the asymptotic analysis of queueing\nsystems in the HW regime. In that stream of literature, the diffusion\ncoefficient is typically a (deterministic) constant. Indeed, Borkar\n\\cite{borkar2005controlled} views the constant diffusion coefficient\nas a characterizing feature of the problems stemming from the\nheavy-traffic approximations in the HW regime. Interestingly, it is\nessential in our work to have the diffusion coefficient depend on the\nstate and the control for achieving the logarithmic optimality gap. In\nessence, incorporating the impact of control on the diffusion\ncoefficient allows us to track the policy performance in a more refined manner.\n\nWhile the novelty of having the diffusion coefficient depend on the\ncontrol facilitates better system performance, it also leads to a more\ncomplex diffusion control problem. In particular, the associated HJB\nequation is fully nonlinear; it is also nonsmooth under a linear\nholding cost structure. In what follows, we show that each of the HJB\nequations in the sequence has a unique smooth solution on bounded\ndomains and that each of the diffusion control problems (when\nconsidered up to a stopping time) admits an optimal Markov control\npolicy. 
Interpreting this solution appropriately in the context of the\noriginal problem gives rise to a policy under which the optimality gap\nis logarithmic in $n$. As in the performance analysis of fixed\npolicies,\\vadjust{\\goodbreak} strong approximations will be used in the last step, where we\npropose a sequence of controls for the original queueing systems, and\nshow that we achieve the desired performance. However, it is important\nto note that strong approximation results alone are not sufficient for\nour results. Rather, for the improved optimality gaps we need the\nrefined properties of the solutions to the HJB equations. Specifically,\ngradient estimates for the sequence of solutions to the HJB equations\n(cf. Theorem \\ref{thmHJB1sol}) play a central role in our proofs.\n\nOur analysis restricts attention to a linear holding cost structure.\nHowever, we expect the analysis to go through for some other cost\nstructures including convex holding costs. Indeed, the analysis of the\nconvex holding cost case will probably be simpler as one tends to get\n``interior'' solutions in that case as opposed to the corner solutions\nin the linear cost case, which causes nonsmoothness. One could also\nenrich the model by allowing abandonment. We expect the analysis to go\nthrough with no major changes in these cases as well; see the\ndiscussion of possible extensions in Section \\ref{secconclusions}.\nFor purposes of clarity, however, we chose not to incorporate these\nadditional\/alternative features because we feel that the current set-up\nenables us to focus on and clearly communicate the main idea: the use\nof a novel Brownian model with state\/control dependent diffusion\ncoefficient to obtain improved optimality gaps.\n\n\\subsection*{Organization of the paper} Section \\ref{secmodel}\nformulates the model and states the main result. Section \\ref\n{secresult} introduces a (sequence of) Brownian control problem(s),\nwhich are then analyzed in Section \\ref{secADCP}. 
A performance\nanalysis of our proposed policy appears in Section \\ref{sectracking}.\nThe major building blocks of the proof are combined to establish the\nmain result in Section \\ref{seccombining} and some concluding remarks\nappear in Section~\\ref{secconclusions}.\n\n\\section{Problem formulation}\\label{secmodel}\n\nWe consider a queueing system with a single server-pool consisting of\n$n$ identical servers (indexed from 1 to $n$) and a set $\\I=\\{1,\\ldots\n,I\\}$ of job classes as depicted in Figure \\ref{figv}. Jobs of\n\\begin{figure}\n\n\\includegraphics{777f01.eps}\n\n\\caption{A multiclass queue with many servers.}\n\\label{figv}\n\\end{figure}\nclass-$i$ arrive according to a Poisson process with rate $\\lambda_i$\nand wait in their designated queue until their service begins. Once\nadmitted to service, the service time of a class-$i$ job is distributed\nas an exponential random variable with rate $\\mu_i>0$. All service and\ninterarrival times are mutually independent.\n\n\\subsection*{Heavy-traffic scaling}\n\nWe consider a sequence of systems indexed by the number of servers $n$.\nThe superscript $n$ will be attached to various processes and\nparameters to make the dependence on $n$ explicit. (It will be omitted\nfrom parameters and other quantities that do not change with $n$.) We\nassume\\vspace*{1pt} that $\\lambda_i^n=a_i\\lambda^n$ for all~$n$, where $\\lambda\n^n$ is the total arrival rate and $a_i>0$ for $i\\in\\I$ with $\\sum\n_{i}a_i=1$. This assumption is made for simplicity of notation and\npresentation. 
Nothing changes\\vspace*{1pt} in our results if one assumes, instead,\nthat $\\lambda_i^n\/n\\rightarrow\\lambda_i$ and $\\sqrt{n}(\\lambda\n_i^n\/n-\\lambda_i)\\rightarrow\\hat{\\lambda}_i$ as $n\\tinf$ where\n$\\lambda_i\/\\sum_{k\\in\\I}\\lambda_k=a_i>0$.\\vadjust{\\goodbreak}\n\nThe nominal load in the $n$th system is then given by\n\\[\nR^n=\\sum_{i}\\frac{\\lambda_i^n}{\\mu_i}=\\lambda^n\\sum_{i}\\frac\n{a_i}{\\mu_i},\n\\]\nso that defining $\\bar{\\mu}=[\\sum_{i}a_i\/\\mu_i]^{-1}$ we have that\n$R^n=\\lambda^n\/\\bar{\\mu}$, which corresponds to the nominal number\nof servers required to handle all the incoming jobs. The heavy-traffic\nregime is then imposed by requiring that the number of servers deviates\nfrom the nominal load by a term that is a square root of the nominal\nload. Formally, we impose this by assuming that $\\lambda^n$ is such\nthat\n\\begin{equation} \\label{eqHTstaff}\nn= R^n+\\beta\\sqrt{R^n}\n\\end{equation}\nfor some\n$\\beta\\in(-\\infty,\\infty)$ that does not scale with $n$. Also, we\ndefine the relative load imposed on the system by class-$i$ jobs,\ndenoted by $\\nu_{i}$, as follows:\n\\begin{equation}\\label{eqrhodefin}\n\\nu_i =\\frac{a_i\/\\mu_i}{\\sum\n_{k\\in\\I}a_k\/\\mu_k}.\n\\end{equation}\nNote that $\\sum_{i\\in\\I}\\nu_i=1$, and $\\nu_i n$ can be interpreted\nas a first-order (fluid) estimate for the number of servers busy\nserving class-$i$ customers.\n\n\\subsection{System dynamics}\n\nLet $Q_i\\lam(t)$ and $X_i\\lam(t)$ denote the number of class-$i$ jobs\nin the queue and in the system, respectively, at time $t$ in the $n$th\nsystem. Similarly, let $Z_i\\lam(t)$ denote the number of servers\nworking on class-$i$ jobs at time $t$. Clearly, for all $i, n,\nt$, the following holds:\n\\[\nX_i\\lam(t)=Z_i\\lam(t)+Q_i\\lam(t).\\vadjust{\\goodbreak}\n\\]\n\nIn our setting, a control corresponds to determining how many of the\n\\mbox{class-$i$} jobs currently in the system are placed in queue and in\nservice for $i\\in\\I$. 
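As a quick numerical illustration of the scaling quantities defined above (the nominal load $R^n$, the square-root staffing rule, and the relative loads $\nu_i$), consider the following sketch; all parameter values are hypothetical and chosen only for concreteness:

```python
import math

# Hypothetical two-class example: a_i are arrival fractions, mu_i service rates.
a = [0.6, 0.4]
mu = [2.0, 1.0]
lam = 1000.0  # total arrival rate lambda^n

# Nominal load R^n = lambda^n * sum_i a_i / mu_i (first-order server requirement).
R = lam * sum(ai / mi for ai, mi in zip(a, mu))

beta = 1.0
# Square-root staffing: n = R^n + beta * sqrt(R^n),
# rounded up to an integer number of servers for illustration.
n = math.ceil(R + beta * math.sqrt(R))

# Relative loads nu_i: first-order fraction of servers busy with class i.
denom = sum(aj / mj for aj, mj in zip(a, mu))
nu = [(ai / mi) / denom for ai, mi in zip(a, mu)]

assert abs(sum(nu) - 1.0) < 1e-9
print(R, n, nu)  # R = 700.0, n = 727, nu ~ [0.4286, 0.5714]
```

Here $\nu_1 n \approx 311$ and $\nu_2 n \approx 416$ servers would, to first order, be busy with the two classes.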
We take the process $Z^n$ as our control in the\n$n$th system. Note that one can equivalently take the queue length\nprocess $Q^n$ as the control. (The knowledge of either process is\nsufficient to pin down the evolution of the system given the arrival,\nservice processes and the initial conditions.) Clearly, the control\nprocess must satisfy certain requirements for admissibility, including\nthe usual nonanticipativity requirement. We defer a precise\nmathematical definition of admissible controls for now (see Definition\n\\ref{definadmissiblecontrols}). However, it should be clear that,\ngiven the process $Z^n$, one can construct the other processes of interest.\n\nTo be specific, consider a complete probability space $(\\Omega\n,\\mathcal{F},\\mathbb{P})$ and $2I$ mutually independent\n\\textit{unit-rate} Poisson processes $(\\mathcal{N}_i^a(\\cdot), \\mathcal\n{N}_i^d(\\cdot), i\\in\\I)$ on that space. Given the \\textit{primitives}\n$(\\mathcal{N}_i^d(\\cdot),\\mathcal{N}_i^a(\\cdot),X_i\\lam(0),Z_i\\lam\n(0);i\\in\\I)$ and the control process~$Z^n$, we construct the\nprocesses $X^n, Q^n$ as follows: for $t \\geq0$ and $i \\in\\mathcal{I}$\n\\begin{eqnarray} \\label{eqdynamics1}\nX_i\\lam(t)&=&X_i\\lam(0)+\\mN_i^a(\\lambda_i^n t)-\\mN_i^d\\biggl( \\mu\n_i\\int_0^t Z_i\\lam(s)\\,ds \\biggr),\\\\\nQ_i\\lam(t)&=&X_i\\lam(t)-Z_i\\lam(t).\n\\end{eqnarray}\nThe processes $Z^n, Q^n, X^n$ must jointly satisfy the constraints\n\\begin{equation}\\label{eqnon-negativity}\n(Q\\lam(t), X\\lam(t),Z\\lam(t))\\in\\mathbb{Z}_+^{3I},\\qquad\ne\\cdot Z\\lam(t)\\leq n,\n\\end{equation}\nwhere $e$ is the $I$-dimensional vector of ones.\n\nControls can be preemptive or nonpreemptive. Under a nonpreemptive\ncontrol, a job that is assigned to a server keeps the server busy until\nits service is completed. In particular, given a nonpreemptive control\n$Z^n$, the process $Z_i^n$ can decrease only through service\ncompletions of class-$i$ jobs. 
In contrast, the class of preemptive\ncontrols is broader. While it includes nonpreemptive policies, it also\nincludes controls that (occasionally) may preempt a job's service. The\npreempted job is put back in the queue and its service is resumed at a\nlater time (possibly by a different server). Hence, the class of\npreemptive controls subsumes the class of nonpreemptive ones (which is\nalso immediate from Definition 1 in \\cite{AMR02}) and the cost of an\noptimal policy among preemptive ones gives a lower bound for that among\nthe nonpreemptive ones.\n\nIn what follows, we will largely focus on preemptive controls, which\nare easier to work with, and derive a specific policy which is near\noptimal in that class. The specific policy we derive is, however,\nnonpreemptive, and therefore, is near optimal among the nonpreemptive\npolicies as well. More specifically, the policy we propose belongs to a\nclass which we refer to as \\textit{tracking policies.}\n\nTo facilitate the definition of tracking policies, define $\\mathcal{U}\n\\subset\\mathbb{R}_+^I$ as\n\\begin{equation}\\label{eqmUdefin}\n\\mathcal{U}=\\biggl\\{u\\in\\bbR_+^I\\dvtx\n\\sum_{i}u_i=1\\biggr\\}.\n\\end{equation}\nAlso, for all $i$ and $t\n\\geq0$, let\n\\begin{equation} \\label{eqtildeXdefin}\n\\check{X}_i\\lam(t)=X_i\\lam(t)-\\nu_in.\n\\end{equation}\nHence, the process $\\check{X}_i^n$ captures the\noscillations of the process $X_i\\lam$ around its ``fluid''\napproximation $\\nu_i n$. Throughout our analysis, for $x\\in\\bbR$ we\nlet $(x)^+=\\max\\{0,x\\}$ and $(x)^-=\\max\\{0,-x\\}$.\n\\begin{defin}\\label{defintracking}\nGiven a function $h\\dvtx\\bbR^I\\to\\mathcal{U}$, an $h$-tracking policy\nmakes resource allocation decisions in the $n$th system as follows:\n\\begin{longlist}\n\\item It is nonpreemptive. That is, once a server starts\nworking on a job, it continues without interruption until that job's\nservice is completed.\n\\item It is work conserving. 
That is, the number of busy\nservers satisfies $e\\cdot Z^n(t)=(e\\cdot X^n(t))\\wedge n$ for all $t>\n0$. In particular, no server is idle as long as there are $n$ or more\njobs in the system.\n\\item When a class-$i$ job arrives to the system it joins the\nqueue of class $i$ if all servers are busy processing other jobs.\nOtherwise, the lowest-indexed idle server starts working on that job.\n\\item A server that finishes processing a job at a time $t$,\nidles if all queues are empty. Otherwise, she starts working on a job\nof class $i\\in\\mathcal{K}(t-)$ with probability $\\lambda_i^n\/\\sum\n_{k\\in\\mathcal{K}(t-)}\\lambda_k^n$, where, for $t>0$, the set\n$\\mathcal{K}(t-)$ is defined by\n\\begin{equation}\\label{eqmathKdefin}\n\\mathcal{K}(t-)=\\bigl\\{k\\in\\I\\dvtx Q_k(t)-h_k(\\check\n{X}^n(t-))\\bigl(e\\cdot\\check{X}^n(t-)\\bigr)^+>0\\bigr\\}.\n\\end{equation}\nFinally, if $(e\\cdot\\check{X}^n(t-))^+>0$ and $\\mathcal\n{K}(t-)=\\varnothing$, she picks for service a customer from the lowest\nindex nonempty queue.\n\\end{longlist}\n\\end{defin}\n\\begin{rem}\\label{remrandomization}\nFor our optimality-gap bounds and, in particular, for the proof of\nTheorem \\ref{thmSSC} it is important that the policy be such that\neach of the job classes in the set $\\mathcal{K}(t)$ gets a sufficient\nshare of the capacity. This prevents excessive oscillation of the\nqueues that may compromise the optimality gaps. 
Such oscillations could\narise if, for example, the policy chooses for service a~job of class\n\\[\ni=\\min\\argmax_{k\\in\\I}\\bigl\\{Q_k(t-)-h_k(\\check{X}^n(t-))\\bigl(e\\cdot\n\\check{X}^n(t-)\\bigr)^+\\dvtx Q_k^n(t-)>0\\bigr\\}.\n\\]\nRandomization is just one way\nto overcome such oscillations and, as the proofs (specifically that\nof Theorem \\ref{thmSSC}) reveal, any choice rule that guarantees a\nsufficient share of the capacity to a class in $\\mathcal{K}(t-)$ will\nsuffice.\n\\end{rem}\n\nOur main result shows that a (nonpreemptive) tracking policy can\nachieve a near optimal performance among preemptive policies. Note that\nin our setting under\\vadjust{\\goodbreak} preemption, one can restrict attention to\nwork-conserving policies, that is, policies under which the servers\nnever idle as long as there are jobs to work on.\\footnote{By a coupling\nargument, this can be shown to hold with general queueing costs\nprovided that there are no abandonments and that the service times are\nexponential; see, for example, the coupling argument on page 1126 of\n\\cite{AMR02}.} More precisely, a control is work conserving if the\nfollowing holds for all $t>0$:\n\\begin{equation}\\label{eqworkconservation}\ne\\cdot Q\\lam(t)=\\bigl(e\\cdot\\check{X}\\lam(t)\\bigr)^+.\n\\end{equation}\n\nHereafter, we focus on work-conserving controls. Each such control can\nbe mapped into a ratio control, which specifies what fraction of the\ntotal number of jobs in queue belongs to each class. 
To that end, let\n\\begin{equation}\\label{eqUQmap}\nU_i\\lam(t)=\\frac{Q_i\\lam(t)}{(e\\cdot Q\\lam(t))\\vee1} .\n\\end{equation}\nNote that the original control $Z^n$ can be recovered from the ratio\ncontrol~$U^n$ as follows:\n\\[\nZ_{i}^{n}(t) = X_i^n(t) - U_{i}^{n}(t) \\bigl(e\\cdot\\check{X}^n(t)\\bigr)^+ .\n\\]\nEquations (\\ref{eqdynamics1})--(\\ref{eqnon-negativity}) can then be\nreplaced by\n\\begin{eqnarray}\n\\label{eqdynamics2}\nX_i\\lam(t)&=&X_i\\lam(0)+\\mN_i^a(\\lambda_i^n t) \\nonumber\\\\[-8pt]\\\\[-8pt]\n&&{} -\\mN_i^d\\biggl(\\mu_i \\int_0^t \\bigl(\nX_i\\lam(s)-U_i\\lam(s) \\bigl(e\\cdot\\check{X}^n(t)\\bigr)^+ \\bigr) \\,ds\\biggr), \\nonumber\\\\\nQ_i\\lam(t)&=&U_i\\lam(t)\\bigl(e\\cdot\\check{X}^n(t)\\bigr)^+, \\\\\nZ_i\\lam(t)&=&X_i\\lam(t)-Q_i\\lam(t),\\\\\n\\check{X}_i\\lam(t)&=&X_i\\lam(t)-\\nu_i n,\\\\\n\\label{eqnon-negativity2}\nU\\lam(t)&\\in&\\mathcal{U},\\qquad Q\\lam(t)\\in\\mathbb{Z}_+^I,\\qquad X\\lam(t)\\in\n\\mathbb{Z}_+^I.\n\\end{eqnarray}\n\nDefine the filtration\n\\[\n\\bar{\\mathcal{F}}_t=\\sigma\\{\\mN_i^a(s),\\mN_i^d(s);i\\in\\I, s\\leq\nt\\}\n\\]\nand the $\\sigma$-field\n\\begin{equation}\\label{eqcheckFdefin}\n\\bar{\\mathcal{F}}_{\\infty}=\\bigvee\n_{t\\geq0} \\bar{\\mathcal{F}}_t.\n\\end{equation}\nInformally,\n$\\bar{\\mathcal{F}}_{\\infty}$ contains the information about the\nentire evolution of the processes $(\\mN_i^a,\\mN_i^d,i\\in\\I)$. A\nnatural notion of admissibility requires that the control is\nnonanticipative so that it only uses historical information about the\nprocess $X^n$ and about the arrivals and service completions up to the\ndecision epoch. 
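Before formalizing admissibility, the bookkeeping in (\ref{eqUQmap}) and the recovery of $Z^n$ from the ratio control can be illustrated with a small numeric sketch (the state values below are made up):

```python
# Work-conserving state for n servers: head-counts X and queue lengths Q per class.
n = 10
nu = [0.5, 0.5]                     # fluid fractions nu_i (illustrative)
X = [7, 8]                          # jobs of each class in the system
total_queue = max(sum(X) - n, 0)    # (e . X - n)^+ under work conservation
Q = [2, 3]                          # a split of the queue with sum(Q) == total_queue

# Ratio control U_i = Q_i / ((e . Q) v 1).
U = [q / max(sum(Q), 1) for q in Q]

# Recover Z_i = X_i - U_i (e . check_X)^+, where check_X_i = X_i - nu_i * n.
check_X = [x - f * n for x, f in zip(X, nu)]
pos_part = max(sum(check_X), 0.0)
Z = [x - u * pos_part for x, u in zip(X, U)]

assert sum(Q) == total_queue
# The recovered allocation agrees with Z_i = X_i - Q_i.
assert all(abs(z - (x - q)) < 1e-9 for z, x, q in zip(Z, X, Q))
```

In this example all $n=10$ servers are busy ($Z=[5,5]$), consistent with work conservation when $e\cdot X > n$.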
To accommodate randomized policies (as the $h$-tracking\npolicy) we allow the control to use other information too as long as\nthis information is independent of $\\bar{\\mathcal{F}}_{\\infty}$.\n\\begin{defin}\\label{definadmissiblecontrols}\nA process $U=(U_i(t), t\\geq0, i \\in\\mathcal{I})$\nis a ratio control for the $n$th system if there exists a process\n$\\mathbb{X}\\lam=(X\\lam,Q\\lam,Z\\lam,\\check{X}\\lam)$ such that,\ntogether with the primitives, $(\\mathbb{X}\\lam,U)$ satisfies\n(\\ref{eqdynamics2})--(\\ref{eqnon-negativity2}). The process\n$U$ is an admissible ratio control if, in addition, it is adapted to\nthe filtration $\\mathcal{G}\\vee\\mathcal{F}_t\\lam$ where\n\\begin{eqnarray*}\n\\mathcal{F}_t\\lam&=&\\sigma\\biggl\\{\\mN_i^a(\\lambda\n_i^n s),X_i\\lam(s),\\mu_i\\int_0^s Z_i\\lam(u)\\,du,\\\\\n&&\\hphantom{\\sigma\\biggl\\{}\n\\mN_i^d\\biggl(\\mu_i\\int_0^s Z_i\\lam(u)\\,du\\biggr); i\\in\n\\I,0\\leq s\\leq t\\biggr\\},\n\\end{eqnarray*}\nand $\\mathcal{G}$ is a $\\sigma$-field that is independent of $\\bar\n{\\mathcal{F}}_{\\infty}$. The process $\\mathbb{X}^n$ is then said to\nbe the queueing process associated with the ratio control $U$. We let\n$\\Pi\\lam$ be the set of admissible ratio controls for the $n$th\nsystem.\n\\end{defin}\n\nRatio controls are work conserving by definition, but they need not be\nnonpreemptive in general. However, note that given a function $h\\dvtx \\bbR\n^I\\to\\mathcal{U}$, the (nonpreemptive) $h$-tracking policy\ncorresponds to a ratio control $U_h$, which is nonpreemptive. To be\nspecific, given the primitives and the $h$-tracking policy, one can\nconstruct the corresponding queueing process $\\mathbb\n{X}^n=(X^n,Q^n,Z^n,\\check{X}^n)$ (see the construction after Lemma\n\\ref{lemstrongappbounds}). Then the ratio control $U_h$ is\nconstructed using the relation (\\ref{eqUQmap}) so that $\\mathbb\n{X}^n$ and $U_h$ jointly satisfy (\\ref{eqdynamics2})--(\\ref\n{eqnon-negativity2}). 
Hence, one can speak of the ratio control and\nthe queueing process associated with an $h$-tracking policy. Note that\nsince the tracking policy makes resource allocation decisions using\nonly information on the state of the system at the decision epoch\n(together with a randomization that is independent of the history), the\nresulting ratio control is admissible in the sense of Definition \\ref\n{definadmissiblecontrols}. The terms ratio control and $h$-tracking\npolicy appear in several places in the paper. It will be clear from the\ncontext whether we refer to an arbitrary ratio control or to one\nassociated with an $h$-tracking policy.\n\nWe close this section by stating the main result of the paper. To that\nend, let\n\\begin{equation}\\label{eqmathXdefin}\n\\mathcal{X}^n=\\{(x,q)\\in\\mathbb{Z}_+^{2I}\\dvtx\nq=u(e\\cdot\nx-n)^+\\mbox{ for some } u\\in\\mathcal{U}\\}.\n\\end{equation}\nThat is, $\\mathcal{X}^n$ is the set on which $(X^n, Q^n)$ can\nobtain values under work conservation. In this set $e\\cdot q=(e\\cdot\nx-n)^+$ so that positive queue and idleness do not co-exist. We let\n$\\Ex_{x,q}^{U}[\\cdot]$ denote the expectation with respect to the\ninitial condition $(X^n(0),Q^n(0))=(x,q)$ and an admissible ratio\ncontrol $U$. Given a ratio control $U$ and initial conditions $(x,q)$,\nthe expected infinite horizon discounted cost in the $n$th system\nis given by\n\\begin{equation}\\label{eqcost1}\nC\\lam(x,q,U)=\\Ex_{x,q}^{U}\\biggl[\\int_0^{\\infty\n}e^{-\\gamma s}\nc\\cdot Q\\lam(s)\\,ds\\biggr],\n\\end{equation}\nwhere $c=(c_1,\\ldots,c_I)'$ is the strictly positive vector of holding\ncost rates and $\\gamma> 0$ is the discount rate. 
For $(x,q) \\in\n\\mathcal{X}^n$, the value function is given by\n\\[\nV\\lam(x,q)=\\inf_{U\\in\\Pi\\lam}\\Ex_{x,q}^{U}\\biggl[\\int_0^{\\infty\n} e^{-\\gamma s}c\\cdot Q\\lam(s)\\,ds\\biggr].\n\\]\n\nWe next state our main result.\n\\begin{theorem}\\label{thmmain} Fix a sequence $\\{(x^n,q^n),n\\in\\bbZ_+\\}\n$ such that\n\\mbox{$(x^n,q^n)\\in\\mathcal{X}^n$} and $|x^n-\\nu n| \\leq M \\sqrt{n}$ for\nall $n$ and some $M>0$. Then, there exists a sequence of tracking\nfunctions $\\{h^n,n\\in\\bbZ_+\\}$ together with constants $C,k>0$ (that\ndo not depend on $n$) such that\n\\[\nC\\lam(x^n,q^n,U_h^n)\\leq V\\lam(x^n,q^n)+C\\log^{k} n \\qquad\\mbox{for all\n} n,\n\\]\nwhere $U_h^n$ is the ratio control associated with the $h^{n}$-tracking\npolicy.\n\\end{theorem}\n\nThe constant $k$ in our bound may depend on all system and cost\nparameters but not on $n$. In particular, it may depend on $(\\mu\n_i,c_i,a_i;i\\in\\I)$ and $\\beta$. Its value is explicitly defined\nafter the statement of Theorem \\ref{thmHJB1sol}.\n\nTheorem \\ref{thmmain} implies, in particular, that the optimal\nperformance for nonpreemptive policies is close to that among the\nlarger family of preemptive policies. Indeed, we identify a\nnonpreemptive policy (a tracking policy) in the queueing model whose\ncost performance is close to the optimal value of the preemptive\ncontrol problem.\n\nThe rest of the paper is devoted to the proof of Theorem \\ref{thmmain},\nwhich proceeds by studying a sequence of auxiliary Brownian\ncontrol problems. The next subsection offers a heuristic derivation and\na justification for the relevance of the sequence of Brownian control\nproblems to be considered in later sections.\n\n\\subsection{Toward a Brownian control problem}\nWe proceed by deriving a sequence of approximating Brownian control\nproblems heuristically, which will be instrumental in deriving a\nnear-optimal policy for our original control problem. 
It is important\nto note that we derive an approximating Brownian control problem for\neach $n$ as opposed to deriving a single approximating problem (for the\nentire sequence of problems). This distinction is crucial for achieving\nan improved optimality gap for $n$ large because it allows us to tailor\nthe approximation to each element of the sequence of systems.\n\nTo this end, let\n\\[\nl_i^n=\\lambda_i^n-\\mu_i\\nu_i n \\qquad\\mbox{for } i\\in\\I.\n\\]\nFixing an admissible control $U^n$ for the $n$th system [and\ncentering as in (\\ref{eqtildeXdefin})], we can then write (\\ref\n{eqdynamics2}) as\n\\begin{equation}\\label{eqcheckXdynamics}\n\\check{X}_i\\lam(t)=\\check{X}_i\\lam(0)+l_i\\lam t-\\mu\n_i\\int_0^t\n\\bigl( \\check{X}_i\\lam(s)-U^n_i(s)\\bigl( e\\cdot\\check{X}\\lam(s)\\bigr)^+\n\\bigr) \\,ds+\\check{W}_i\\lam(t),\\hspace*{-30pt}\n\\end{equation}\nwhere\n\\begin{eqnarray}\\label{eqWtildedefin}\n\\check{W}_i\\lam(t)&=&\\mN_i^a(\\lambda_i^n t)-\\lambda_i^n\nt+\\mu_i\\int_0^t \\bigl( \\check{X}_i\\lam(s)-U^n_i(s)\\bigl( e\\cdot\\check\n{X}\\lam(s)\\bigr)^+ \\bigr) \\,ds\\nonumber\\\\[-8pt]\\\\[-8pt]\n&&{}-\n\\mN_i^d\\biggl(\\mu_i\\int_0^t \\bigl( \\check{X}_i\\lam(s)+\\nu_i\nn-U^n_i(s)\\bigl( e\\cdot\\check{X}\\lam(s)\\bigr)^+ \\bigr) \\,ds \\biggr).\\nonumber\n\\end{eqnarray}\nIn words, $\\check{W}_i\\lam(t)$ captures the deviations of the Poisson\nprocesses from their means. 
It is\\vspace*{1pt} natural to expect that an\napproximation result of the following form will hold: $(\\check\n{X}_i^n,\\check{W}_i^n;i\\in\\I)$ can be approximated by $(\\hX\n_i^n,\\hW_i^n;i\\in\\I)$ where\n\\begin{eqnarray*}\n\\hat{X}_i\\lam(t)&=&\\hat{X}_i\\lam(0)+l_i^n t -\\mu_i\\int_0^t \\bigl(\n\\hat{X}_i\\lam(s)-U^n_i(s)\\bigl(e\\cdot\\hat{X}\\lam(s)\\bigr)^+ \\bigr) \\,ds+\\hat\n{W}_i\\lam(t),\n\\\\\n\\hat{W}_i\\lam(t)&=&\\tilde{B}_i^a(\\lambda_i^n t)+\\tilde{B}_i^S\\biggl(\\mu\n_i\\int_0^t \\bigl( \\hat{X}_i\\lam(s)+\\nu_i n-U^n_i(s)\\bigl( e\\cdot\\hat\n{X}\\lam(s)\\bigr)^+ \\bigr) \\,ds\\biggr)\n\\end{eqnarray*}\nand $\\tilde{B}^a,\\tilde{B}^S$ are $I$-dimensional independent\nstandard Brownian motions. Moreover, by a time-change argument we can\nwrite (see, e.g., Theorem 4.6 in \\cite{KaS91})\n\\begin{eqnarray}\\label{eqhatX}\n\\hat{X}_i\\lam(t) &=&\\hat{X}_i\\lam\n(0)+l_i^n t -\\mu_i\\int_0^t \\bigl( \\hat{X}_i\\lam(s)-U^n_i(s)\\bigl(e\\cdot\\hat\n{X}\\lam(s)\\bigr)^+ \\bigr) \\,ds\\nonumber\\\\[-8pt]\\\\[-8pt]\n&&{}+\\int_0^t \\sqrt{\\lambda_i^n+\\mu_i\\bigl( \\hat\n{X}_i\\lam(s)+\\nu_i n-U^n_i(s)\\bigl( e\\cdot\\hat{X}\\lam(s)\\bigr)^+\n\\bigr)}\\,dB_i(s),\\nonumber\\hspace*{-30pt}\n\\end{eqnarray}\nwhere $B$ is an $I$-dimensional standard Brownian motion constructed by setting\n\\begin{eqnarray*}B_i(t)&=& \\int_0^t\\frac{d\\tilde{B}_i^{a}(\\lambda_i^n s)+d\\tilde{B}_i^{S}(\\mu\n_i\\int_0^s ( \\hat{X}_i\\lam(u)+\\nu_i n-U^n_i(u)( e\\cdot\\hat\n{X}\\lam(u))^+ ) \\,du)}{\\sqrt{\\lambda_i^n+\\mu_i ( \\hat\n{X}_i\\lam(s)+\\nu_i n-U^n_i(s)( e\\cdot\\hat{X}\\lam(s))^+\n)}}.\n\\end{eqnarray*}\n\nTaking a leap of faith and arguing heuristically, we next consider a\nBrownian control problem with the system dynamics\n\\begin{equation}\\label{eqnbmdefn1}\n\\hat{X}\\lam(t)=x+\\int_0^t b\\lam(\\hat{X}\\lam\n(s),\\hat\n{U}^n(s))\\,ds+\\int_0^t \\sigma\\lam(\\hat{X}\\lam(s),\\hat\n{U}^n(s))\\,dB(s),\\hspace*{-30pt}\n\\end{equation}\nwhere $\\hat{U}^n$ will be\nan
admissible control for the Brownian system and\n\\begin{equation}\n\\label{eqnbmdefn30}\nb_i\\lam(x,u)=l_i\\lam-\\mu_i\\bigl(x_i-u_i(e\\cdot x)^+\\bigr)\n\\end{equation}\nand\n\\begin{equation}\n\\label{eqnbmdefn3}\n\\sigma_i\\lam(x,u)=\\sqrt{\\lambda_i^n+\\mu_i \\nu_i n +\\mu\n_i\\bigl(x_i-u_i(e\\cdot x)^+\\bigr)}.\n\\end{equation}\nNote that the Brownian control problem will only be used to propose a~candidate policy, whose near optimality will be verified from first\nprinciples without relying on the heuristic derivations of this section.\n\nTo repeat, the preceding definition is purely formal and provided only\nas a means of motivating our approach. In what follows, we will\ndirectly state and analyze an auxiliary Brownian control problem\nmotivated by the above heuristic derivation. The analysis of the\nauxiliary Brownian control problem lends itself to constructing near\noptimal policies for our original control problem. To be more specific,\nthe system dynamics equation (\\ref{eqnbmdefn1}), and in particular,\nthe fact that its variance is state and control dependent, is crucial\nto our results. Indeed, it is this feature of the auxiliary Brownian\ncontrol problems that yields an improved optimality gap.\n\nNeedless to say, one needs to take care in interpreting (\\ref\n{eqnbmdefn1})--(\\ref{eqnbmdefn3}), which are meaningful only up\nto a suitably defined hitting time. In particular, to have $\\sigma^n$\nwell defined, we restrict attention to the process while it is within\nsome bounded domain. In fact, it suffices for our purposes to fix\n$\\kappa>0$ and $m\\geq3$ and consider the Brownian control problem\nonly up to the exit time from a ball of the form\n\\begin{equation}\\label{eqBkappadefin}\n\\mB_{\\kappa}^n=\\bigl\\{x\\in\\mathbb{R}^I\\dvtx |x|< \\kappa\n\\sqrt{n}\\log^m\nn\\bigr\\},\n\\end{equation}\nwhere \\mbox{$|\\cdot|$} denotes the Euclidean\nnorm. We will fix the constant $m$ throughout and suppress the\ndependence on $m$ from the notation.
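To make the state- and control-dependent variance concrete, here is a minimal Euler--Maruyama sketch of the dynamics (\ref{eqnbmdefn1}) with $b^n$ and $\sigma^n$ as in (\ref{eqnbmdefn30})--(\ref{eqnbmdefn3}), killed on exit from $\mB_{\kappa}^n$ and accumulating the discounted queueing cost. All parameters are hypothetical, and the ratio control is an arbitrary fixed point of the simplex rather than the optimal Markov control from the HJB equation:

```python
import numpy as np

rng = np.random.default_rng(2)

# Euler--Maruyama sketch of the controlled diffusion; hypothetical data.
n, m, kappa, gamma = 5000, 3, 1.0, 1.0
mu = np.array([1.0, 0.5])
nu = np.array([0.6, 0.4])
c = np.array([3.0, 1.0])
lam = mu * nu * n
l = lam - mu * nu * n                  # l_i^n = 0 at critical load

def b(x, u):                           # drift b_i^n(x, u)
    return l - mu * (x - u * max(x.sum(), 0.0))

def sigma(x, u):                       # diffusion coefficient sigma_i^n(x, u);
    v = lam + mu * nu * n + mu * (x - u * max(x.sum(), 0.0))
    return np.sqrt(np.maximum(v, 0.0)) # the guard is inactive once n >= n(kappa)

radius = kappa * np.sqrt(n) * np.log(n) ** m   # radius of the ball B_kappa^n
u = np.array([0.5, 0.5])               # fixed ratio control in the simplex U
x, dt, t, cost = np.zeros(2), 1e-3, 0.0, 0.0
while np.linalg.norm(x) < radius and t < 20.0:
    cost += np.exp(-gamma * t) * (c @ u) * max(x.sum(), 0.0) * dt
    x += b(x, u) * dt + sigma(x, u) * np.sqrt(dt) * rng.standard_normal(2)
    t += dt
print(round(cost, 2))                  # one-path discounted-cost estimate
```

Note that the noise amplitude is recomputed from the current state and control at every step; freezing it at its value for $x=0$ would recover the cruder constant-variance approximation that a single limiting Brownian problem corresponds to.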
Setting\n\\begin{equation}\\label{eqnkappa}\nn(\\kappa)=\\inf\\{n\\in\n\\bbZ_+\\dvtx \\sigma^n(x,u)\\geq1 \\mbox{ for all } x\\in\\mB_{\\kappa}^n,\nu\\in\\mathcal{U}\\},\n\\end{equation}\nthe diffusion coefficient is strictly positive for all $n \\geq n(\\kappa\n)$ and $x\\in\\mB_{\\kappa}^n$. Note that, for all $i\\in\\I$, $x\\in\n\\mB_{\\kappa}^n$ and $u\\in\\mathcal{U}$,\n\\[\n(\\sigma_i^n(x,u))^2\\geq\\lambda_i^n+\\mu_i\\nu_in-2\\mu_i\\kappa\\sqrt\n{n}\\log^mn,\n\\]\nso that $(\\sigma_i^n(x,u))^2\\geq\\mu_i\\nu_in\/2\\geq1$ for all\nsufficiently large $n$ and, consequently, $n(\\kappa)<\\infty$.\n\\begin{rem} In what follows, and, in particular, through the proof of\nTheorem \\ref{thmmain}, the reader should note that while choosing the\nsize of the ball to be $\\epsilon n$ (with $\\epsilon$ small enough)\nwould suffice for the nondegeneracy of the diffusion coefficient, that\nchoice would be too large for our optimality gap proofs.\n\\end{rem}\n\n\\section{An approximating diffusion control problem (ADCP)}\\label{secresult}\n\nMotivated by the discussion in the preceding section, we define\nadmissible systems as follows.\n\\begin{defin}[(Admissible systems)]\\label{definadmissiblesystemBrownian}\nFix $\\kappa>0$, $n\\in\\bbZ\n_+$ and $x\\in\\mathbb{R}^I$. We refer to\n$\\theta=(\\Omega,\\mathcal{F},(\\mathcal{F}_t),\\mathbb{P},\\hat\n{U},B)$ as an admissible $(\\kappa,n)$-system if:\n\\begin{longlist}[(a)]\n\\item[(a)] $(\\Omega,\\mathcal{F},(\\mathcal{F}_t),\\mathbb{P})$ is\na complete filtered probability space.\\vadjust{\\goodbreak}\n\\item[(b)] $B(\\cdot)$ is an $I$-dimensional standard Brownian motion\nadapted to $(\\mF_t)$.\n\\item[(c)] $\\hat{U}$ is $\\mathcal{U}$-valued, $\\mF$-measurable\nand $(\\mF_t)$ progressively measurable.\n\\end{longlist}\n\nThe process $\\hat{U}$ is said to be the control associated with\n$\\theta$. 
We also say that $\\hat{X}$ is a controlled process\nassociated with the initial data $x$ and an admissible system $\\theta$\nif $\\hat{X}$ is a continuous $(\\mF_t)$-adapted process on $(\\Omega\n,\\mathcal{F},\\mathbb{P})$ such that, almost surely, for $t\\leq\\hat\n{\\tau}_{\\kappa}^n$,\n\\[\n\\hat{X}(t)=x+\\int_0^t b\\lam(\\hat{X}(s),\\hat{U}(s))\\,ds+\\int_0^t\n\\sigma\\lam(\\hat{X}(s),\\hat{U}(s))\\,dB(s),\n\\]\nwhere $b^n(\\cdot,\\cdot)$ and $\\sigma^n(\\cdot,\\cdot)$ are as\ndefined in (\\ref{eqnbmdefn30}) and (\\ref{eqnbmdefn3}),\nrespectively, and\n$\\hat{\\tau}_{\\kappa}^n=\\inf\\{t\\geq0\\dvtx\\hat{X}(t)\\notin\\mB_{\\kappa\n}^n\\}$. Given $\\kappa>0$ and $n\\in\\bbZ_+$, we let $\\Theta(\\kappa\n,n)$ be the set of admissible $(\\kappa,n)$-systems.\n\\end{defin}\n\nThe Brownian control problem then corresponds to optimally choosing an\nadmissible $(\\kappa,n)$-system with associated control $(\\hat\n{U}(t),t\\geq0)$ that achieves the minimal cost in the optimization problem\n\\begin{equation}\\label{eqoptbrownian}\n\\hat{V}\\lam(x,\\kappa)=\\inf_{\\theta\\in\\Theta\n(\\kappa,n)}\\Ex\n_{x}^{\\theta}\\biggl[\\int_0^{\\hat{\\tau}_{\\kappa}^n} e^{-\\gamma s}\n\\sum_{i\\in\\I}c_i \\hat{U}_i(s)\\bigl(e\\cdot\\hat{X}(s)\\bigr)^+\\,ds\n\\biggr],\n\\end{equation}\nwhere $\\Ex_x^{\\theta}[\\cdot]$ is the\nexpectation operator when the initial state is $x\\in\\mathbb{R}^I$ and\nthe admissible system is $\\theta$. Hereafter, we refer to (\\ref\n{eqoptbrownian}) as the \\textit{ADCP on} $\\mB_{\\kappa}^n$.\nThe following lemma shows that Definition \\ref\n{definadmissiblesystemBrownian} is not vacuous.
The proof appears in\nthe \\hyperref[app]{Appendix}.\n\\begin{lem} \\label{lemexistenceofcontrolled}\nFix the initial state $x\\in\\mathbb{R}^I$, $\\kappa>0$,\n$n\\geq n(\\kappa)$ and an admissible $(\\kappa,n)$-system $\\theta$.\nThen, there exists a unique controlled process $\\hat{X}$ associated\nwith $x$ and $\\theta$.\n\\end{lem}\n\nTo facilitate future analysis, note from the definition of $\\hat{\\tau\n}_k^n$ and (\\ref{eqoptbrownian}) that\n\\begin{equation}\\label{eqvaluebound}\n\\hat{V}^n(x,\\kappa)\\leq\n\\frac{1}{\\gamma}(e\\cdot c)\\kappa\\sqrt{n}\\log^m n.\n\\end{equation}\n\\begin{defin}[(Markov controls)]\n\\label{definmarkoviancontrols} We say that an admissible\n$(\\kappa,n)$-system $\\theta=(\\Omega,\\mathcal{F},(\\mathcal\n{F}_t),\\mathbb{P},\\hat{U},B)$ with the associated controlled process\n$\\hat{X}^n$ induces a Markov control if there exists a function\n$g^n(\\cdot)\\dvtx \\mathcal{B}_{\\kappa}^n \\to\\mathcal{U}$ such that\n$\\hat{U}(t)=g^n(\\hat{X}^n(t))$ for $t \\leq\\hat{\\tau}_{\\kappa}^n$.\nWe extend the function $g^n$ to $\\mathbb{R}^I$ as follows:\n\\begin{equation}\nh^n(x)= \\cases{\ng^n(x), &\\quad$x \\in\\mathcal{B}_{\\kappa}^{n}$, \\cr\ne_1, & \\quad otherwise,}\n\\end{equation}\nwhere $e_1$ is the $I$-dimensional vector whose first component is $1$\nwhile the others are $0$. 
We refer to $h^n(\\cdot)$ as the tracking\nfunction associated with the admissible system $\\theta$.\n\\end{defin}\n\nIn what follows, a policy $\\hat{U}$ will be called optimal for the\napproximating diffusion control problem (ADCP) on $\\mB_{\\kappa}^n$ if\nthere exists an admissible $(\\kappa,n)$-system $\\theta=(\\Omega\n,\\mathcal{F},(\\mathcal{F}_t),\\mathbb{P},\\hat{U},B)$ such that\n\\[\n\\hat{V}\\lam(x,\\kappa)=\\Ex_{x}^{\\theta}\\biggl[\\int_0^{\\hat{\\tau\n}_{\\kappa}^n} e^{-\\gamma s} \\sum_{i\\in\\I}c_i \\hat{U}_i(s)\\bigl(e\\cdot\n\\hat{X}(s)\\bigr)^+\\,ds\\biggr].\n\\]\n\nRecall that $X$ and $U$ are used to denote performance relevant\nstochastic processes in both the Brownian model and the original\nqueueing model, and that we add a hat, that is, we use $\\hat{X}$ and\n$\\hat{U}$ in the context of the Brownian model. To avoid confusion,\nthe reader should keep in mind that hat-processes correspond to the\nADCP while the ones with no hats correspond to the original queueing model.\n\n\\subsection*{Roadmap for the remainder of the paper} The main result in\nTheorem~\\ref{thmmain} builds on the following steps:\n\\begin{enumerate}\n\\item In Section \\ref{secADCP} we show that for each $n$, the HJB\nequation associated with the ADCP has a unique\\vadjust{\\goodbreak} and sufficiently smooth\nsolution. Using that solution we advance an optimal Markov control for\nthe ADCP together with the corresponding tracking function. We also\nidentify useful gradient bounds on the solutions to the sequence of HJB\nequations; cf. Theorem~\\ref{thmHJB1sol}.\n\n\\item In Section \\ref{sectracking} we conduct a performance analysis\nof $h$-tracking policies in the queueing system; cf. 
Theorem \\ref{thmSSC}.\n\n\\item The result of Theorem \\ref{thmSSC} together with the gradient\nestimates in Theorem \\ref{thmHJB1sol} is combined in a\nTaylor expansion-type argument in Section \\ref{seccombining} to\ncomplete the proof of Theorem \\ref{thmmain}.\n\\end{enumerate}\n\nAs a convention, throughout the paper we use the capital letter $C$ to\ndenote a constant that does not depend on $n$. The value of $C$ may\nchange from line to line within the proofs but it will be clear from\nthe context.\n\n\\section{Solution to the ADCP} \\label{secADCP} This section provides\na solution for the ADCP on $\\mB_{\\kappa}^n$ for each $n\\in\\bbZ_+$ and\n$\\kappa>0$. The HJB equation is a fully nonlinear and nonsmooth PDE.\nAs such, it requires extra care when compared with the usual semilinear\nPDEs that arise in the analysis of \\textit{asymptotically} optimal\ncontrols in the Halfin--Whitt regime. We will build on existing results\nin the theory of PDEs and proceed through the following steps: (a)\nestablish the existence and uniqueness of classical solutions; (b)\nrelate this unique solution to the value function of the ADCP; and (c)\nestablish useful gradient estimates on the solution of the HJB\nequation. The last step is not necessary for existence and uniqueness\nbut is important for the analysis of optimality gaps.\n\nIn what follows, we fix $\\kappa>0$ and $n\\geq n(\\kappa)$ and suppress\nthe dependence of the solution to the HJB equation on $n$ and $\\kappa\n$.
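Before introducing the HJB equation, one structural observation is worth keeping in mind: for fixed $x$ and $\phi$, the quantity minimized over $u\in\mathcal{U}$ is affine in $u$, so its minimum over the simplex is attained at a vertex $e_i$. This is ultimately why vertex-valued (tracking) controls suffice. A quick numerical sanity check of this fact, with made-up coefficients standing in for the per-class terms of the Hamiltonian:

```python
import numpy as np

rng = np.random.default_rng(3)

# An affine function of u over the simplex U = {u >= 0, e.u = 1} is
# minimized at a vertex e_i. The coefficients below are made up; in the
# HJB equation they would be (c_i + mu_i phi_i - mu_i phi_ii / 2)(e.x)^+.
coef = np.array([2.0, 0.7, 1.3])
vertex_min = coef.min()                    # value at the best vertex
for _ in range(1000):
    u = rng.dirichlet(np.ones(3))          # random point of the simplex
    assert coef @ u >= vertex_min - 1e-12  # no mixture beats the best vertex
print(int(coef.argmin()))                  # index of the optimal vertex
```

Any convex combination of the coefficients is at least their minimum, so randomizing over classes can never improve on routing the whole queue to the best class.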
The following notation is needed to introduce the HJB equation.\nGiven a twice continuously differentiable function $\\phi$, define\n\\[\n\\phi_i=\\frac{\\partial\\phi}{\\partial x_i}\\quad\\mbox{and}\\quad \\phi_{ii}=\n\\frac{\\partial^2 \\phi}{\\partial x_i^2} .\n\\]\nAlso, define the operator $A^n_u$ for $u\\in\\mathcal{U}$ as follows:\n\\begin{equation}\\label{eqgendefin}\nA_{u}\\lam\\phi= \\sum_{i\\in\\I} b_i\\lam(\\cdot\n,u)\\phi_i+\\frac\n{1}{2}\\sum_{i\\in\\I} (\\sigma_i\\lam(\\cdot,u))^2 \\phi_{ii}.\n\\end{equation}\nDefining\n\\[\nL(x,u)=\\sum_{i\\in\\I}c_iu_i (e\\cdot x)^+\n\\]\nfor $x\\in\\bbR^I$ and $u\\in\\mathcal{U}$, the HJB equation is given by\n\\begin{equation} \\label{eqHJB0}\n0=\\inf_{u \\in\\mathcal{U}}\\{\nL(x,u)+A_{u}\\lam\\phi\n(x)-\\gamma\\phi(x)\\}.\n\\end{equation}\nSubstituting $b\\lam(\\cdot,\\cdot)$ and $\\sigma\\lam(\\cdot\n,\\cdot)$ into (\\ref{eqHJB0}) gives\n\\begin{eqnarray} \\label{eqHJB1simp}\n0&=& -\\gamma\\phi(x) + (e\\cdot x)^+\\cdot\\min_{i\\in\\I}\\biggl\\{\nc_i+\\mu_i\\phi_i(x)-\\frac{1}{2}\\mu_i\\phi_{ii}(x)\\biggr\\}\n\\nonumber\\\\[-8pt]\\\\[-8pt]\n&&{}+\\sum_{i\\in\\I} (l_i\\lam-\\mu_ix_i)\\phi_i(x)\n+\\frac{1}{2}\\sum_{i\\in\\I} \\bigl(\\lambda_i^n+\\mu_i(\\nu\n_in+x_i)\\bigr)\\phi_{ii}(x).\\nonumber\n\\end{eqnarray}\n\nOur analysis of the HJB equation (\\ref{eqHJB1simp}) draws on existing\nresults on fully nonlinear PDEs, and, in particular, the results on\nBellman--Pucci type equations; cf. Chapter 17 of \\cite{TandG}.\n\nIn what follows, fixing a set $\\mB\\subseteq\\mathbb{R}^I$, $\\mC\n^2(\\mB)$ denotes the space of twice continuously differentiable\nfunctions from $\\mB$ to $\\mathbb{R}$. For $u\\in\\mC^{2}(\\mB)$, we\nlet~$Du$ and $D^2u$ denote the gradient and the Hessian of $u$,\nrespectively. The space $\\mC^{2,\\alpha}(\\mB)$ is then the subspace of\nthose members of $\\mC^{2}(\\mB)$ whose second derivatives\nare H\\\"{o}lder continuous of order $\\alpha$.
That is, a twice\ncontinuously differentiable function $u\\dvtx \\mathbb{R}^I\\to\\mathbb{R}$\nis in $\\mC^{2,\\alpha}(\\mB)$ if\n\\[\n\\sup_{x,y\\in\\mB, x\\neq y} \\frac{|D^2u(x)-D^2u(y)|}{|x-y|^{\\alpha\n}}<\\infty,\n\\]\nwhere \\mbox{$|\\cdot|$} denotes the Euclidean\nnorm. We define\n$d_x\\,{=}\\,\\operatorname{dist}(x,\\partial\\mB)\\,{=}\\,\\inf\\{|x\\,{-}\\,y|,\\allowbreak\ny\\,{\\in}\\,\\partial\\mB\\}$ where\n$\\partial\\mB$ stands for the boundary of $\\mB$ and we let\n$d_{x,z}\\,{=}\\,\\min\\{d_x,d_z\\}$. Also, we define\n\\begin{equation}\\label{equstardefin}\n|u|^*_{2,\\alpha,\\mB}=\\sum_{j=0}^2 [u]_{j,\\mB}^*+\\sup_{x,y\\in\\mB,x\\neq\ny}d_{x,y}^{2+\\alpha}\\frac{|D^2u(x)-D^2 u(y)|}{|x-y|^{\\alpha}},\n\\end{equation}\nwhere $[u]_{j,\\mB}^*=\\sup_{x\\in\\mB}d_x^j\n|D^j u(x)|$ for $j=0,1,2$. Note that $d_x^j$ denotes the $j$th power\nof $d_x$ and, similarly, $d_{x,y}^{2+\\alpha}$ is the $(2+\\alpha)$th\npower of $d_{x,y}$. Finally, we let $|u|^*_{0,\\mB}=[u]_{0,\\mB\n}^*=\\sup_{x\\in\\mB}|u(x)|$.\n\nIn the statement of the following theorem, $e_j$ is the $I$-dimensional\nvector with $1$ in the $j$th place and zeros elsewhere. Also, $\\mB\n_{\\kappa}^n$ and $m$ are as defined in (\\ref{eqBkappadefin}), and\n$n(\\kappa)$ is as defined in (\\ref{eqnkappa}).\n\\begin{theorem}\\label{thmHJB1sol}\nFix $\\kappa>0$ and $n\\geq n(\\kappa)$. Then, there\nexists $0<\\alpha\\leq1$ (that does not depend on $n$) and a unique\nclassical solution $\\phi_{\\kappa}\\lam\\in\\mC^{0,1}(\\bar{\\mB}\n_{\\kappa}^n)\\cap\\mC^{2,\\alpha}(\\mB_{\\kappa}^n)$ to the HJB\nequation (\\ref{eqHJB1simp}) on $\\mB_{\\kappa}^n$ with the\nboundary condition $\\phi_{\\kappa}\\lam=0$ on $\\partial\\mB_{\\kappa\n}^n$. Furthermore, there exists a constant $C>0$ (that does not depend\non $n$) such that\n\\begin{equation}\\label{eqgradients0}\n|\\phi_{\\kappa}\\lam|^*_{2,\\alpha,\\mB_{\\kappa\n}^n}\\leq C\\sqrt\n{n}\\log^{k_0} n,\n\\end{equation}\nwhere $k_0=4m(1+1\/\\alpha)$.
In turn, for any $\\vartheta<1$,\n\\begin{equation}\\label{eqgradients1}\n\\sup_{x\\in\\mB_{\\vartheta\\kappa}^n}|D\\phi_{\\kappa\n}^n(x)|\\leq\n\\frac{C}{1-\\vartheta}\\log^{k_1} n \\quad\\mbox{and}\\quad\\sup_{x\\in\\mB\n_{\\vartheta\\kappa}^n}|D^2\\phi_{\\kappa}^n(x)|\\leq\\frac\n{C}{1-\\vartheta}\\frac{\\log^{k_2}n} {\\sqrt{n}}\\hspace*{-26pt}\n\\end{equation}\nwith $k_1=k_0-m$ and $k_2=k_0-2m$. Also,\n\\begin{eqnarray}\\label{eqgenbound}\n&&\\sup_{u\\in\\mathcal{U}}\\biggl|\\sum_{i\\in\\I}\\bigl((\\phi_{\\kappa\n}^n)_{ii}(y)-(\\phi_{\\kappa}^n)_{ii}(x)\\bigr)(\\sigma_i^n(x,u))^2\n\\biggr|\\nonumber\\\\[-8pt]\\\\[-8pt]\n&&\\qquad\\leq\\frac{C}{1-\\vartheta}\\log^{k_1}n\\nonumber\n\\end{eqnarray}\nfor all $x,y\\in\\mB_{\\vartheta\\kappa}^n$ with $|x-y|\\leq1$.\n\\end{theorem}\n\nNote that (\\ref{eqgradients1}) follows immediately from (\\ref\n{eqgradients0}) through the definition of the operation \\mbox{$|\\cdot\n|^*_{2,\\alpha,\\mB_{\\kappa}^n}$} in (\\ref{equstardefin}).\nHenceforth, we will use $k_i,i=0,1,2$ for the values given in the\nstatement of Theorem \\ref{thmHJB1sol}. Moreover, the constant $k$\nappearing in the statement of Theorem \\ref{thmmain} is equal to $k_0+3$.\n\nTheorem \\ref{thmHJB1sol} facilitates a verification result, which we\nstate next followed by the proof of Theorem \\ref{thmHJB1sol}. Below,\n$\\hat{V}^n(x,\\kappa)$ is the value function of the ADCP; cf. equation\n(\\ref{eqoptbrownian}).\n\\begin{theorem}\\label{thmBrownianverification}\nFix $\\kappa>0$ and $n\\geq n(\\kappa)$. Let $\\phi\n_{\\kappa}^n$ be the unique solution to the HJB equation (\\ref\n{eqHJB1simp}) on $\\mB_{\\kappa}^n$ with the boundary condition $\\phi\n_{\\kappa}\\lam=0$ on $\\partial\\mB_{\\kappa}^n$. Then,\n$\\phi_{\\kappa}^n(x)=\\hat{V}^n(x,\\kappa)$ for all $x\\in\\mB_{\\kappa\n}^n$. Moreover, there exists a Markov control which is optimal for the\nADCP on $\\mB_{\\kappa}^n$. 
The tracking function $h_{\\kappa}^{*,n}$\nassociated with this optimal Markov control is defined by $h_{\\kappa\n}^{*,n}(x)=e_{i^n(x)}$, where\n\\begin{equation}\\label{eqixdefin}\ni^n(x)=\\min\\mathop{\\argmin}_{i\\in\\I} \\biggl\\{ \\biggl(c_i+\\mu\n_i(\\phi\n_{\\kappa}\\lam)_i(x)-\\frac{1}{2}\\mu_i(\\phi_{\\kappa}\\lam\n)_{ii}(x)\\biggr)(e\\cdot x)^+\\biggr\\}.\\hspace*{-25pt}\n\\end{equation}\n\\end{theorem}\n\nThe HJB equation (\\ref{eqHJB1simp}) has two sources of\nnondifferentiability. The first source is the minimum operation and the\nsecond is the nondifferentiability of the term $(e\\cdot x)^+$. The\nfirst source of nondifferentiability is covered almost entirely by the\nresults in \\cite{TandG}. To deal with the nondifferentiability of the\nfunction $(e\\cdot x)^+$, we use a construction by approximations. The\nproof of existence and uniqueness in Theorem \\ref{thmHJB1sol} follows\nan approximation scheme where one replaces the nonsmooth function\n$(e\\cdot x)^+$ by a smooth (parameterized by $a$) function $f_a(e\\cdot\nx)$. We show that the resulting ``perturbed'' PDE has a unique\nclassical solution and that as $a\\tinf$ the corresponding sequence of\nsolutions converges, in an appropriate sense, to a solution to~(\\ref\n{eqHJB1simp}) which will be shown to be unique. 
Note that this argument\nis repeated for each fixed $n$ and $\\kappa$.\n\nTo that end, given $a>0$, define\n\\begin{equation}\\label{eqfdefin}\nf_a(y)=\\cases{\ny, &\\quad$\\displaystyle y\\geq\\frac{1}{4a}$,\\vspace*{2pt}\\cr\n\\displaystyle ay^2+\\frac{1}{2}y+\\frac{1}{16a}, &\\quad$\\displaystyle -\\frac{1}{4a}\\leq y\\leq\\frac\n{1}{4a}$,\\vspace*{2pt}\\cr\n0, & \\quad otherwise.}\n\\end{equation}\nReplacing $(e\\cdot x)^+$ with $f_a(e\\cdot x)$ in (\\ref{eqHJB1simp})\ngives the following equation:\n\\begin{eqnarray} \\label{eqHJB2}\n0&=& - \\gamma\\phi(x)+f_a(e\\cdot x)\\cdot\\min_{i\\in\n\\I} \\biggl\\{ c_i+\\mu_i\\phi_i(x)-\\frac{1}{2}\\mu_i\\phi_{ii}(x)\n\\biggr\\} \\nonumber\\\\[-8pt]\\\\[-8pt]\n&&{}+\\sum_{i\\in\\I} (l_i\\lam-\\mu_ix_i)\\phi_i(x)\n+\\frac{1}{2}\\sum_{i\\in\\I} \\bigl(\\lambda_i^n+\\mu_i(\\nu\n_in+x_i)\\bigr)\\phi_{ii}(x).\\nonumber\n\\end{eqnarray}\n\nTo simplify this further, let $\\Gamma= \\mB_{\\kappa}^n\\times\\mathbb\n{R}_+\\times\\mathbb{R}^I\\times\\mathbb{R}^{I\\times I}$ and for all\n$y\\in\\Gamma$, define the function\n\\begin{equation}\\label{eqFForm}\nF_a[y]=\\min\\{F_a^1[y],\\ldots, F_a^I[y]\\},\n\\end{equation}\nwhere for $k\\in\\I$ and $y=(x,z,p,r)\\in\\Gamma$,\n\\begin{eqnarray}\\label{eqFdefin}\nF^k_a[y]&=&f_a(e\\cdot x)\n\\biggl[c_k+\\mu_kp_k-\\frac{1}{2}\\mu_kr_{kk}\\biggr]+\\sum_{i\\in\\I\n}(l_i\\lam-\\mu_ix_i)p_i\n\\nonumber\\\\[-8pt]\\\\[-8pt]\n&&{}+\\frac{1}{2}\\sum_{i\\in\\I} \\bigl(\\lambda_i^n+\\mu_i(\\nu\n_in+x_i)\\bigr)r_{ii}-\\gamma z.\\nonumber\n\\end{eqnarray}\nThen, (\\ref{eqHJB2}) can be rewritten as\n\\begin{equation}\\label{eqFFormPDe}\nF_a[x,u(x),Du(x),D^2u(x)]=0.\n\\end{equation}\n\nIn the following statement we use the gradient notation introduced at\nthe beginning of this section.\n\\begin{prop}\\label{propsolPHJB} Fix $\\kappa>0$, $n\\geq n(\\kappa)$ and $a>0$.
A unique\nclassical solution $\\phi_{\\kappa,a}^n\\in\\mC^{0,1}(\\bar{\\mB}\n_{\\kappa}^n)\\cap\\mC^{2,\\alpha}(\\mB_{\\kappa}^n)$ exists for\nthe PDE (\\ref{eqHJB2}) on $\\mB_{\\kappa}^n$ with the boundary\ncondition $\\phi_{\\kappa,a}^n=0$ on $\\partial\\mB_{\\kappa}^n$. Moreover,\n\\begin{equation}\\label{eqgradients}\n|\\phi^n_{\\kappa,a}|^*_{2,\\alpha,\\mB_{\\kappa\n}^n}\\leq C |\\phi\n^n_{\\kappa,a}|^*_{0,\\mB_{\\kappa}^n}\\log^{k_0}n \\leq\\tilde{C}\n\\end{equation}\nfor $k_0=4m(1+1\/\\alpha)$ where\n$0<\\alpha\\leq1$ and $C>0$ do not depend on $a$ or $n$, and $\\tilde\n{C}$ does not depend on $a$. Also, $\\phi^n_{\\kappa,a}$ is Lipschitz\ncontinuous on the closure $\\bar{\\mB}_{\\kappa}^n$ with a\nLipschitz constant that does not depend on $a$ (but can depend on\n$\\kappa$ and $n$).\n\\end{prop}\n\nWe postpone the proof of Proposition \\ref{propsolPHJB} to the \\hyperref[app]{Appendix}\nand use it to complete the proof of Theorem \\ref{thmHJB1sol},\nfollowed by the proof of Theorem \\ref{thmBrownianverification}.\n\n\\subsection*{\\texorpdfstring{Proof of Theorem \\protect\\ref{thmHJB1sol}}{Proof of Theorem 4.1}}\n\nSince we fix $n$ and $\\kappa$, they will be suppressed below. We\nproceed to show the existence by an approximation argument. To that\nend, fix a sequence $\\{a^k;k\\in\\bbZ_+\\}$ with $a^k\\tinf$ as $k\\tinf$\nand let $\\phi_{a^k}$ be the unique solution to (\\ref{eqHJB2}) as\ngiven by Proposition \\ref{propsolPHJB}. The next step is to show\nthat $\\phi_{a^k}$ has a subsequence that converges in an appropriate\nsense to a function $\\phi$, which is, in fact, a solution to the HJB\nequation (\\ref{eqHJB1simp}). To that end, let\n\\begin{equation}\\label{eqCstar}\n\\mC_{*}^{2,\\alpha\n}(\\mB)=\\{u\\in\\mC^{2,\\alpha}(\\mB)\\dvtx |u|_{2,\\alpha,\\mB}^*<\\infty\\}.\n\\end{equation}\nThen, $\\mC_{*}^{2,\\alpha}(\\mB)$ is a Banach\nspace (see, e.g., Exercise 5.2 in \\cite{TandG}).
Since the bound in\n(\\ref{eqgradients}) is independent of $a$, we have that $\\{\\phi\n_{a^k}\\}$ is a bounded sequence in $\\mC_{*}^{2,\\alpha}(\\mB)$ and hence,\ncontains a convergent subsequence. Let $u$ be a limit point of the\nsequence $\\{\\phi_{a^k}\\}$. Since the gradient estimates in Proposition\n\\ref{propsolPHJB} are independent of $a$, they hold also for the\nlimit function $u$, that is,\n\\begin{equation}\\label{equ-bound}\n|u|_{2,\\alpha,\\mB}^*\\leq C|u|_{0,\\mB\n}^*\\log^{k_0} n\\leq\\tilde{C}\n\\end{equation}\nfor constants\n$\\alpha$ and $C$ that are independent of $n$. Proposition \\ref\n{propsolPHJB} also guarantees that the global Lipschitz constant is\nindependent of $a$ so that we may conclude that $u\\in\\mC\n^{0,1}(\\bar{\\mB})$ and that $u=0$ on $\\partial\\mB$.\n\nWe will now show that $u$ solves (\\ref{eqHJB1simp}) uniquely. To show\nthat $u$ solves (\\ref{eqHJB1simp}), we need to show that $F[u]=0$\n(where $F[\\cdot]$ is defined similarly to $F_a[\\cdot]$ with $(e\\cdot\nx)^+$ replacing $f_a(e\\cdot x)$). To that end, let $\\{\\akl,l\\in\\bbZ_+\\}\n$ be the corresponding convergent subsequence [i.e., such that $\\phi\n_{\\akl}\\rightarrow u$ in $\\mC_*^{2,\\alpha}(\\mB)$]. Henceforth, to\nsimplify notation, we write\n\\[\nF_{\\akl}[\\phi_{\\akl}(x)]=F_{\\akl}[x,\\phi_{\\akl}(x),D\\phi_{\\akl\n}(x),D^2\\phi_{\\akl}(x)]\n\\]\n(and similarly for $F[\\cdot]$). Fix $\\delta\\,{>}\\,0$ and let $\\mB(\\delta\n)\\,{=}\\,\\{x\\,{\\in}\\,\\bbR^I\\dvtx |x|\\,{<}\\,\\kappa\\sqrt{n}\\log^m n\\,{-}\\,\\delta\\}$. Note that\nsince $\\phi_{\\akl}\\rightarrow u$ in $\\mC_*^{2,\\alpha}(\\mB)$ we\nhave, in particular, the convergence of $(\\phi_{\\akl}(x),D\\phi_{\\akl\n}(x),D^2\\phi_{\\akl}(x))\\rightarrow(u(x),Du(x),D^2u(x))$ uniformly in\n$x\\in\\mB(\\delta)$.
The equicontinuity of the function\n$F_a[\\cdot]$\\vadjust{\\goodbreak}\non $\\Gamma$\nguarantees then that\n\\begin{equation}\\label{eqinterim11}\n|F_{\\akl}[\\phi_{\\akl}(x)]-F_{\\akl}[u(x)]|\\leq\n\\epsilon\n\\end{equation}\nfor all $l$ large enough and $x\\in\\mB(\\delta)$.\nNote that $\\sup_{x\\in\\bbR^I}|f_{a}(e\\cdot x)-(e\\cdot x)^+|\\leq\n\\epsilon$ for all $a$ large enough, so that\n\\begin{equation}\\label{eqinterim12}\n\\sup_{x\\in\\mB}|F_{\\akl}[u(x)]-F[u(x)]|\\leq\\epsilon\n\\end{equation}\nfor all $l$ large enough. Combining (\\ref{eqinterim11}) and (\\ref\n{eqinterim12}), we then have\n\\[\n\\sup_{x\\in\\mB(\\delta)}|F_{\\akl}[\\phi_{\\akl}(x)]-F[u(x)]|\\leq2\\epsilon\n\\]\nfor all $l$ large enough. By definition\n$F_{\\akl}[\\phi_{\\akl}(x)]=0$ for all $x\\in\\mB$ and since $\\epsilon$\nwas arbitrary we have that $F[u(x)]=0$ for all $x\\in\\mB(\\delta)$.\nFinally, since $\\delta$ was arbitrary we have that $F[u(x)]=0$ for all\n$x\\in\\mB$. We already argued that $u=0$ on $\\partial\\mB$, so that\n$u$ solves (\\ref{eqHJB1simp}) on~$\\mB$ with $u=0$ on~$\\partial\\mB\n$.
This concludes the proof of existence of a solution to (\\ref\n{eqHJB1simp}) that satisfies the gradient estimates (\\ref{eqgradients0}).\n\nFinally, the uniqueness of the solution to (\\ref{eqHJB1simp}) follows\nfrom Corollary 17.2 in \\cite{TandG} noting that the function\n$F[x,z,p,r]$ is indeed continuously differentiable in the $(z,p,r)$\narguments and it is decreasing in $z$ for all $(x,p,r)$.\n\nUsing Theorem \\ref{thmBrownianverification} [which only uses the\nexistence and uniqueness of the solution $\\phi_{\\kappa}^n(x)$ that we\nalready established] together with (\\ref{eqvaluebound}) we have that\n\\[\n|\\phi_{\\kappa}^n|_{0,\\mB_{\\kappa}^n}=\\sup_{x\\in\\mB_{\\kappa\n}^n}\\hat{V}^n(x,\\kappa)\\leq\\frac{1}{\\gamma}\\kappa\\sqrt{n}\\log^m n.\n\\]\nThe bounds (\\ref{eqgradients0}) and (\\ref{eqgradients1}) now follow\nfrom (\\ref{equ-bound}) and we turn to prove (\\ref{eqgenbound}).\n\nTo that end, since $\\phi_{\\kappa}^n$ solves\n(\\ref{eqHJB1simp}), fixing $x,y\\in\\mB_{\\kappa}^n$ we have\n\\begin{eqnarray}\\label{eqinternational}\\quad\n&&\\biggl|\\frac{1}{2}\\sum_{i\\in\\I}\n\\bigl(\\lambda_i^n+\\mu_i(\\nu_in+x_i)\\bigr)(\\phi_{\\kappa}^n)_{ii}(x)-\n\\frac{1}{2}\\sum_{i\\in\\I} \\bigl(\\lambda_i^n+\\mu_i(\\nu\n_in+y_i)\\bigr)(\\phi_{\\kappa}^n)_{ii}(y)\\biggr|\\nonumber\\\\\n&&\\qquad \\leq\\gamma|\\phi_{\\kappa}^n(x)-\\phi_{\\kappa\n}^n(y)|\\nonumber\\\\[-8pt]\\\\[-8pt]\n&&\\qquad\\quad{} +\\biggl|(e\\cdot x)^+\\cdot\\min\n_{i\\in\\I}\\biggl\\{ c_i+\\mu_i(\\phi_{\\kappa}^n)_i(x)-\\frac{1}{2}\\mu\n_i(\\phi_{\\kappa}^n)_{ii}(x)\\biggr\\}\\nonumber\\\\\n&&\\qquad\\quad\\hphantom{{}+\\biggl|}{} -(e\\cdot y)^+\\cdot\\min_{i\\in\\I}\\biggl\\{\nc_i+\\mu_i(\\phi_{\\kappa}^n)_i(y)-\\frac{1}{2}\\mu_i(\\phi_{\\kappa\n}^n)_{ii}(y)\\biggr\\}\\biggr|.\\nonumber\n\\end{eqnarray}\nWe will now bound each of the elements on the right-hand side. 
To that\nend, let $i(x)$ be as defined in (\\ref{eqixdefin}) and for each\n$x,z\\in\\mB_{\\vartheta\\kappa}^n$ define\n\\[\nM_{i(x)}^n(z)=c_{i(x)}+\\mu_{i(x)}(\\phi_{\\kappa}^n)_{i(x)}(z)-\\tfrac\n{1}{2}\\mu_{i(x)}(\\phi_{\\kappa}^n)_{i(x)i(x)}(z).\n\\]\nUsing (\\ref{eqgradients1}), we have by the mean value theorem that\n\\begin{equation}\\label{eqinterinter}\n|\\phi_{\\kappa}^n(x)-\\phi_{\\kappa}^n(y)|\\leq\n{|x-y|\\max_{i\\in\\I\n}\\sup_{z\\in\\mB_{\\vartheta\\kappa}^n}}|(\\phi_{\\kappa}^n)_i(z)|\\leq\nC\\log^{k_1} n\\vadjust{\\goodbreak}\n\\end{equation}\nfor all $x,y\\in\\mB\n_{\\vartheta\\kappa}^n$ with $|x-y|\\leq1$, and we turn to bound the\nsecond element on the right-hand side of (\\ref{eqinternational}).\nHere, there are two cases to consider. Suppose first that\n$i(x)=i(y)=i$. Then, using (\\ref{eqgradients1}) and the mean value\ntheorem we have\n\\[\n|(\\phi_{\\kappa}^n)_{i}(x)-(\\phi_{\\kappa}^n)_{i}(y)|\\leq{|x-y|\\max\n_{i\\in\\I}\\sup_{z\\in\\mB_{\\vartheta\\kappa}^n}}|(\\phi_{\\kappa\n}^n)_{ii}(z)|\\leq C\\frac{\n\\log^{k_2} n}{\\sqrt{n}}\n\\]\nand, in turn, that\n\\begin{equation} \\label{eqMbound}\n|M_{i}^n(x)-M_{i}^n(y)|\\leq C\\frac{\\log^{k_2}\nn}{\\sqrt{n}}\n\\end{equation}\nfor all $x,y\\in\\mB_{\\vartheta\\kappa}^n$ with $|x-y|\\leq1$. 
Now,\n$|x|\\vee|y|\\leq\\kappa\\sqrt{n}\\log^m n$ for all $x,y\\in\\mB\n_{\\vartheta\\kappa}^n$ and, by (\\ref{eqgradients1}), $\\sup_{z\\in\n\\mB_{\\vartheta\\kappa}^n}|(\\phi_{\\kappa}^n)_{ii}(z)|\\vee|(\\phi\n_{\\kappa}^n)_{i}(z)|\\leq C\\log^{k_1}n$ so that\n\\begin{eqnarray}\\label{eqinterim222222}\n&& |(e\\cdot x)^+ M_i^n(x)-(e\\cdot y)^+\nM_i^n(y)|\\nonumber\\\\\n&&\\qquad\n\\leq\n\\kappa\\sqrt{n}\\log^m n | M_i^n(x)-M_i^n(y)|+\\sup_{z\\in\n\\mB_{\\vartheta\\kappa}^n}\n|M_i^n(z)|\\\\\n&&\\qquad\\leq C\\log^{k_1}n.\\nonumber\n\\end{eqnarray}\nIf, on the other hand, $i(x)\\neq i(y)$ then by the definition\nof $i(\\cdot)$,\n\\begin{eqnarray*}\n&&c_{i(x)}+\\mu_{i(x)}(\\phi_{\\kappa\n}^n)_{i(x)}(x)-\\tfrac{1}{2}\\mu_{i(x)}(\\phi_{\\kappa\n}^n)_{i(x)i(x)}(x)\\\\\n&&\\qquad \\leq\nc_{i(y)}+\\mu_{i(y)}(\\phi_{\\kappa}^n)_{i(y)}(x)-\\tfrac{1}{2}\\mu\n_{i(y)}(\\phi_{\\kappa}^n)_{i(y)i(y)}(x)\n\\end{eqnarray*}\nand\n\\begin{eqnarray*}\n&& c_{i(y)}+\\mu_{i(y)}(\\phi_{\\kappa\n}^n)_{i(y)}(y)-\\tfrac{1}{2}\\mu_{i(y)}(\\phi_{\\kappa\n}^n)_{i(y)i(y)}(y)\\\\\n&&\\qquad \\leq\nc_{i(x)}+\\mu_{i(x)}(\\phi_{\\kappa}^n)_{i(x)}(y)-\\tfrac{1}{2}\\mu\n_{i(x)}(\\phi_{\\kappa}^n)_{i(x)i(x)}(y).\n\\end{eqnarray*}\nThat is,\n\\begin{equation} \\label{eqyetonemore1}\nM_{i(x)}^n(x)\\leq M_{i(y)}^n(x) \\quad\\mbox{and}\\quad\nM_{i(y)}^n(y)\\leq M_{i(x)}^n(y).\n\\end{equation}\nUsing\n(\\ref{eqgradients1}) as before we have for $x,y\\in\\mB_{\\vartheta\n\\kappa}^n$ with $|x-y|\\leq1$ and $i(x)\\neq i(y)$ that\n\\[\n\\bigl| M_{i(x)}^n(x)-M_{i(x)}^n(y)\\bigr|+\\bigl|\nM_{i(y)}^n(x)-M_{i(y)}^n(y)\\bigr| \\leq C\\frac{\\log^{k_2} n}{\\sqrt{n}}.\n\\]\nBy (\\ref{eqyetonemore1}) we then have that\n\\begin{eqnarray*}\n\\bigl| M_{i(x)}^n(x)-M_{i(y)}^n(y)\\bigr|&\\leq&\\bigl|\nM_{i(x)}^n(x)-M_{i(x)}^n(y)\\bigr|\\\\\n&&{}+ \\bigl|\nM_{i(y)}^n(x)-M_{i(y)}^n(y)\\bigr| \\\\\n&\\leq& C\\frac{\\log^{k_2}n}{\\sqrt{n}}\n\\end{eqnarray*}\nfor all such $x$ and $y$. 
In turn, since $|x|\\vee|y|\\leq\\kappa\\sqrt\n{n}\\log^m n$,\n\\begin{equation}\\label{eqinterim222224}\n\\bigl| (e\\cdot x)^+M_{i(x)}^n(x)- (e\\cdot\ny)^+M_{i(y)}^n(y)\\bigr|\\leq C\\log^{k_1} n\n\\end{equation}\nfor $x,y\\in\\mB_{\\vartheta\\kappa}^n$ with $|x-y|\\leq1$ and\n$i(x)\\neq i(y)$. Plugging (\\ref{eqinterinter}), (\\ref{eqinterim222222})\nand (\\ref{eqinterim222224}) into the right-hand\nside of (\\ref{eqinternational}) we get\n\\begin{eqnarray}\\label{eqtheinternational}\\qquad\n&&\\biggl|\\frac{1}{2}\\sum_{i\\in\\I} \\bigl(\\lambda\n_i^n+\\mu_i(\\nu_in+x_i)\\bigr)(\\phi_{\\kappa}^n)_{ii}(x)-\n\\frac{1}{2}\\sum_{i\\in\\I} \\bigl(\\lambda_i^n+\\mu_i(\\nu\n_in+y_i)\\bigr)(\\phi_{\\kappa}^n)_{ii}(y)\\biggr|\\nonumber\\\\[-8pt]\\\\[-8pt]\n&&\\qquad\n\\leq C \\log^{k_1} n\\nonumber\n\\end{eqnarray}\nfor all $x,y\\in\\mB_{\\vartheta\\kappa}^n$ with $|x-y|\\leq1$.\nFinally, recall that\n\\[\n\\sigma_i\\lam(x,u)=\\sqrt{\\lambda_i^n+\\mu_i \\nu_in +\\mu\n_i\\bigl(x_i-u_i(e\\cdot x)^+\\bigr)}\n\\]\nso that for all $u\\in\\mathcal{U}$,\n\\begin{eqnarray*}\n\\hspace*{-4pt}&&\\biggl|\\sum_{i\\in\\I}\\bigl((\\phi_{\\kappa\n}^n)_{ii}(y)-(\\phi_{\\kappa}^n)_{ii}(x)\\bigr)(\\sigma_i^n(x,u))^2\\biggr|\n\\\\\n\\hspace*{-4pt}&&\\qquad =\\biggl|\\sum_{i\\in\\I}(\\phi_{\\kappa}^n)_{ii}(y)\\bigl(\\lambda\n_i^n+\\mu_i \\nu_in +\\mu_i\\bigl(x_i-u_i(e\\cdot x)^+\\bigr)\\bigr)\\\\\n\\hspace*{-4pt}&&\\qquad\\quad\\hspace*{3.5pt}{}\n-(\\phi_{\\kappa}^n)_{ii}(x)\\bigl(\\lambda_i^n+\\mu_i \\nu_in +\\mu\n_i\\bigl(x_i-u_i(e\\cdot x)^+\\bigr)\\bigr)\\biggr|\n\\\\\n\\hspace*{-4pt}&&\\qquad\\leq\\biggl|\\frac{1}{2}\\sum_{i\\in\\I} \\bigl(\\lambda_i^n+\\mu\n_i(\\nu_in+x_i)\\bigr)(\\phi_{\\kappa}^n)_{ii}(x)\\\\\n\\hspace*{-4pt}&&\\qquad\\quad\\hspace*{2pt}{}-\n\\frac{1}{2}\\sum_{i\\in\\I} \\bigl(\\lambda_i^n+\\mu_i(\\nu\n_in+y_i)\\bigr)(\\phi_{\\kappa}^n)_{ii}(y)\\biggr|\\\\\n\\hspace*{-4pt}&&\\qquad\\quad{} +\\biggl|\\frac{1}{2}\\sum_{i\\in\\I} 
(\\phi_{\\kappa\n}^n)_{ii}(x)\\mu_iu_i(e\\cdot x)^+-(\\phi_{\\kappa}^n)_{ii}(y)\\mu\n_iu_i(e\\cdot y)^+\\biggr|\\\\\n\\hspace*{-4pt}&&\\qquad\\quad{} +\\biggl|\\frac{1}{2}\\sum_{i\\in\n\\I} (\\phi_{\\kappa}^n)_{ii}(y)\\mu_i(x_i-y_i)\\biggr|.\n\\end{eqnarray*}\nThe last two terms above are bounded by $C\\log^{k_1}n$ by (\\ref\n{eqgradients1}) and using $|x|\\vee|y|\\leq\\kappa\\sqrt{n}\\log^mn$.\nTogether with (\\ref{eqtheinternational}) this establishes (\\ref\n{eqgenbound}) and concludes the proof of the theorem.\n\n\\subsection*{\\texorpdfstring{Proof of Theorem \\protect\\ref{thmBrownianverification}}{Proof of Theorem 4.2}}\nFix an initial condition $x\\in\\mB_{\\kappa}^n$ and an admissible\n$(\\kappa,n)$-system $\\theta=(\\Omega,\\mathcal{F},(\\mathcal\n{F}_t),\\mathbb{P},\\hat{U},B)$ and let $\\hat{X}^n$ be the associated\ncontrolled process.\\vadjust{\\goodbreak} Using It\\^{o}'s lemma for the function $\\varphi\n(t,x)=e^{-\\gamma t} \\phi_{\\kappa}\\lam(x)$ in conjunction with the inequality\n\\[\nL(x,u)+A_{u}\\phi_{\\kappa}\\lam(x)-\\gamma\\phi_{\\kappa}\\lam(x)\\geq\n0 \\qquad\\mbox{for all } x\\in\\mB_{\\kappa}^n, u\\in\\mathcal{U}\n\\]\n[recall that $\\phi_{\\kappa}^n$ solves (\\ref{eqHJB1simp})] we have that\n\\begin{eqnarray}\\label{eqinterim3}\\qquad\n\\phi_{\\kappa}\\lam(x)&\\leq& \\Ex_x^{\\theta}\\int_0^{t\\wedge\\hat\n{\\tau}_{\\kappa}^n}e^{-\\gamma s} L(\\hat{X}^n(s),\\hat{U}(s))\\,ds+\\Ex\n_{x}^{\\theta}e^{-\\gamma(t\\wedge\\hat{\\tau}_{\\kappa}^n)}\\phi\n_{\\kappa}\\lam\\bigl(\\hat{X}^n(t\\wedge\\hat{\\tau}_{\\kappa}^n)\\bigr)\n\\nonumber\\\\[-8pt]\\\\[-8pt]\n&&{}-\\Ex_{x}^{\\theta}\\sum_{i\\in\\I}\\int_0^{t\\wedge\\hat{\\tau\n}_{\\kappa}^n}\ne^{-\\gamma s} (\\phi_{\\kappa}\\lam)_i(\\hat{X}^n(s))\\sigma_i^n(\\hat\n{X}^n(s),\\hat{U}(s)) \\,dB(s).\\nonumber\n\\end{eqnarray}\nHere, $\\hat{\\tau}_{\\kappa}^n$ is as defined in Definition \\ref\n{definadmissiblesystemBrownian} and it is a stopping time with respect\nto $(\\mathcal{F}_t)$ because of 
the continuity of $\\hat{X}^n$. We now\nclaim that\n\\[\n\\Ex_{x}^{\\theta}\\bigl[e^{-\\gamma(t\\wedge\\hat{\\tau}_{\\kappa\n}^n)}\\phi_{\\kappa}\\lam\\bigl(\\hat{X}^n(t\\wedge\\hat{\\tau}_{\\kappa\n}^n)\\bigr)\\bigr]\\rightarrow0 \\qquad\\mbox{as } t\\tinf.\n\\]\nIndeed, as $\\phi_{\\kappa} \\lam$ is bounded on $\\mB_{\\kappa}^n$, on\nthe event $\\{\\hat{\\tau}_{\\kappa}^n=\\infty\\}$ we have that\n\\[\ne^{-\\gamma(t\\wedge\\hat{\\tau}_{\\kappa}^n)}\\phi_{\\kappa}\\lam\n\\bigl(\\hat{X}^n(t\\wedge\\hat{\\tau}_{\\kappa}^n)\\bigr)\\rightarrow0 \\qquad\\mbox{as }t\\tinf.\n\\]\nOn the event $\\{\\hat{\\tau}_{\\kappa}^n<\\infty\\}$ we have $\\hat\n{X}^n(\\hat{\\tau}_{\\kappa}^n)\\in\\partial\\mB_{\\kappa}^n$ and, by the\ndefinition of $\\hat{\\tau}_{\\kappa}^n$, that $\\phi_{\\kappa}\\lam\n(\\hX^n(\\hat{\\tau}_{\\kappa}^n))=0$. The convergence in expectation\nthen follows from the bounded convergence theorem (using again the\nboundedness of $\\phi_{\\kappa}\\lam$ on $\\mB_{\\kappa}^n$). The last\nterm in (\\ref{eqinterim3}) equals zero by the optional stopping\ntheorem.\\vadjust{\\goodbreak}\n\nLetting $t\\tinf$ in (\\ref{eqinterim3}) and applying the monotone\nconvergence theorem, we then have\n\\[\n\\phi_{\\kappa}\\lam(x)\\leq\\Ex_x^{\\theta} \\biggl[\\int_0^{\\hat{\\tau\n}_{\\kappa}^n}e^{-\\gamma s} L(\\hat{X}^n(s),\\hat{U}(s))\\,ds\\biggr].\n\\]\n\nSince the admissible system $\\theta$ was arbitrary, we have that $\\phi\n_{\\kappa}\\lam(x)\\leq\\hat{V}\\lam(x,\\kappa)$. To show that this\ninequality is actually an equality, let\n\\begin{equation}\\label{eqratiofunchouce}\nh_{\\kappa\n}^n(x)=e_{i^n(x)},\n\\end{equation}\nwhere $e_{i^n(x)}$ is\nas defined in the statement of the theorem.\n\nThe continuity of $\\phi_{\\kappa}^n$ guarantees that the function\n$i^n(x)$ is Lebesgue measurable, and so is, in turn, $h_{\\kappa\n}^n(\\cdot)$. 
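One way to see this measurability claim is the following sketch, assuming (for illustration only) the tie-breaking convention that $i^n(x)$ selects the smallest index attaining the pointwise minimum of continuous index functions $M_j^n$, as in the preceding section:

```latex
% Sketch: since phi_kappa^n is C^2, each M_j^n is continuous, and with
% i^n(x) the smallest minimizing index, each level set
\{x\in\mB_{\kappa}^n : i^n(x)=i\}
 =\bigcap_{j\in\I}\{x : M_i^n(x)\leq M_j^n(x)\}
  \Bigm\backslash \bigcup_{j<i}\{x : M_j^n(x)\leq M_i^n(x)\}
% is obtained from closed sets by finitely many set operations, hence
% Borel; Borel (and so Lebesgue) measurability of i^n(.) and of
% h_kappa^n(.) = e_{i^n(.)} follows.
```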
Consider now the autonomous SDE:\n\\begin{equation}\\label{eqautonomous}\n\\hX^n(t)=x+\\int\n_0^t \\hat{b}^n(\\hX^n(s))\\,ds+\\int_0^t \\hat{\\sigma}^n(\\hX\n^n(s))\\,dB(s),\n\\end{equation}\nwhere $\\hat\n{b}^n(y)=b^n(y,h_{\\kappa}^n(y))$ and $\\hat{\\sigma}^n(y)=\\sigma\n^n(y,h_{\\kappa}^n(y))$ on $\\mB_{\\kappa}^n$. Then, $\\hat{b}^n$ and\n$\\hat{\\sigma}^n$ are bounded and measurable on the bounded domain\n$\\mB_{\\kappa}^n$. Also, as the matrix $\\hat{\\sigma}^n$ is diagonal\nand the elements on the diagonal are strictly positive on $\\mB_{\\kappa\n}^n$, it is positive definite there. Hence, a weak solution exists for\nthe autonomous SDE (see, e.g.,\\vspace*{2pt} Theorem 6.1 of \\cite\n{krylov2008controlled}). In particular, there exists a probability\\vadjust{\\goodbreak}\nspace $(\\tilde{\\Omega},\\mathcal{G},\\tilde{\\Pd})$, a filtration\n$(\\mathcal{G}_t)$ that satisfies the usual conditions, a Brownian\nmotion $B(t)$ and a continuous process $\\hX^n$---both adapted to\n$(\\mathcal{G}_t)$, so that $\\hX^n$ satisfies the autonomous SDE (\\ref\n{eqautonomous}). 
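The nondegeneracy observation above can be written out explicitly; a minimal sketch (the symbol $\hat{a}^n$ for the diffusion matrix is introduced here only for illustration):

```latex
% Pointwise positive definiteness of the (diagonal) diffusion matrix:
% writing a^n(y) = hat{sigma}^n(y) hat{sigma}^n(y)^T,
\xi^{\top}\hat{a}^n(y)\xi
 =\sum_{i\in\I}\bigl(\hat{\sigma}_i^n(y)\bigr)^2\xi_i^2>0
\qquad\mbox{for all } \xi\neq0,\ y\in\mB_{\kappa}^n,
% since each diagonal entry hat{sigma}_i^n(y) is strictly positive there;
% this is the nondegeneracy used when invoking the weak-existence theorem.
```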
Finally, since~$\\hX^n$ has continuous sample paths\nand it is adapted, it is also progressively measurable (see,\ne.g.,\\vspace*{1pt}\nProposition 1.13 in \\cite{KaS91}) and, by measurability of $h_{\\kappa\n}^n(\\cdot)$, so is the process $\\hat{U}(t)=h_{\\kappa}^n(\\hX^n(t))$.\nConsequently, $\\theta=(\\tilde{\\Omega},\\mathcal{G},\\mathcal\n{G}_t,\\tilde{\\Pd},\\hat{U},B)$ is an admissible system in the sense\nof Definition \\ref{definadmissiblesystemBrownian} and $\\hX^n$ is the\ncorresponding controlled process.\n\nTo see that $\\theta$ is optimal for the ADCP on $\\mB_{\\kappa}^n$,\nnote that for $s<\\hat{\\tau}_{\\kappa}^n$, we have by the HJB equation\n(\\ref{eqHJB0}) that\n\\[\nL(\\hX^n(s),\\hat{U}(s))+A_{\\hat{U}(s)}\\phi_{\\kappa}\\lam(\\hX\n^n(s))-\\gamma\\phi_{\\kappa}\\lam(\\hX^n(s))=0.\n\\]\nApplying It\\^{o}'s rule as before, together with the bounded and\ndominated convergence theorems, we then have that\n\\[\n\\phi_{\\kappa}\\lam(x)=\\Ex_x^{\\theta}\\biggl[\\int_0^{\\hat{\\tau\n}_{\\kappa}^n}e^{-\\gamma s} L(\\hX^n(s),\\hat{U}(s))\\,ds\\biggr]\n\\]\nand the proof is complete.\n\n\\section{The performance analysis of tracking policies}\\label{sectracking}\n\nThis section shows that given an optimal Markov control policy for the\nADCP together with its associated tracking function $h_{\\kappa\n}^{*,n}$, the nonpreemptive tracking policy imitates, in a particular\nsense, the performance of the Brownian system.\n\\begin{theorem} \\label{thmSSC}\nFix $\\kappa$ and $\\kappa'<\\kappa$ as well as a\nsequence $\\{(x^n,q^n),n\\in\\bbZ_+\\}$ such that $(x^n,q^n)\\in\\mathcal\n{X}^n$, and $|x^n-\\nu n| \\leq M \\sqrt{n}$ for all $n$ and some\n$M>0$. 
Let $\\phi_{\\kappa}^n$ and $h_{\\kappa}^{*,n}$ be as in Theorem\n\\ref{thmBrownianverification} and define\n\\[\n\\psi^n(x,u)=L(x,u)+A^n_u \\phi_{\\kappa}^n(x)-\\gamma\\phi_{\\kappa\n}^n(x) \\qquad\\mbox{for }x\\in\\mB_{\\kappa}^n,u\\in\\mathcal{U}.\n\\]\nLet $U_h^n$ be the ratio control associated with the $h_{\\kappa\n}^{*,n}$-tracking policy and let $\\mathbb{X}^n=(X^n,Q^n,Z^n,\\check\n{X}^n)$ be the associated queueing process with the initial conditions\n$Q^n(0)=q^n$ and $\\check{X}^n(0)=x^n-\\nu n$ and define\n\\[\n\\tau_{\\kappa',T}^n=\\inf\\{t\\geq0\\dvtx\\check{X}^n(t)\\notin\\mB_{\\kappa\n'}^n\\}\\wedge T\\log n.\n\\]\nThen,\n\\[\n\\Ex\\biggl[\\int_0^{\\tau_{\\kappa',T}^n} e^{-\\gamma s} |\\psi\n^n(\\check{X}^n(s),U_h^n(s))-\\psi^n(\\check{X}^n(s),h_{\\kappa\n}^{*,n}(\\check{X}^n(s)))|\\,ds\\biggr]\\leq C\\log^{k_0+3}n\n\\]\nfor a constant $C$ that does not depend on $n$.\n\\end{theorem}\n\nTheorem \\ref{thmSSC} is proved in the \\hyperref[app]{Appendix}. The proof builds on the\ngradient estimates in Theorem \\ref{thmHJB1sol} and on a state-space\ncollapse-type result for certain sub-intervals of $[0,\\tau_{\\kappa',T}^n]$.\n\\begin{rem} \\label{remSSC} Typically one establishes a stronger\nstate-space collapse result showing that the actual queue and the\ndesired queue values are close in supremum norm. The difficulty with\nthe former approach is that the tracking functions here are nonsmooth.\nWhile it is plausible that one can smooth these functions appropriately\n(as is done, e.g., in \\cite{AMR02}), such smoothing might\ncompromise the optimality gap. 
Fortunately, the weaker integral\ncriterion implied by Theorem\n\\ref{thmSSC} suffices for our purposes.\n\\end{rem}\n\n\\section{Proof of the main result}\\label{seccombining}\n\nFix $\\kappa>0$ and let $\\phi_{\\kappa}^n$ be the solution to (\\ref\n{eqHJB1simp}) on $\\mB_{\\kappa}^n$ (see Theorem \\ref{thmHJB1sol}).\nWe start with the following lemma where $b_i^n(\\cdot,\\cdot)$ and\n$\\sigma_i^n(\\cdot,\\cdot)$ are as in (\\ref{eqnbmdefn30}) and\n(\\ref{eqnbmdefn3}), respectively.\n\\begin{lem} \\label{lemito} Let $U^n$ be an admissible ratio control\nand let $\\mathbb{X}^n=(X^n,Q^n$, $Z^n,\\check{X}^n)$ be the queueing\nprocess associated with $U^n$. Fix $\\kappa'<\\kappa$ and $T>0$ and let\n\\[\n\\tau_{\\kappa',T}^n=\\inf\\{t\\geq0\\dvtx\\check{X}^n(t)\\notin\\mB_{\\kappa\n'}^n\\}\\wedge T\\log n.\n\\]\nThen, there exists a constant $C$ that does not depend on $n$ (but may\ndepend on $T$, $\\kappa$ and $\\kappa'$) such that\n\\begin{eqnarray*}\n\\Ex[e^{-\\gamma\\tau_{\\kappa',T}^n}\\phi_{\\kappa}^n(\\check\n{X}^n(\\tau_{\\kappa',T}^n))]&\\leq&\\phi_{\\kappa}^n(\\check{X}^n(0))+\n\\Ex\\biggl[\\int_0^{\\tau_{\\kappa',T}^n} e^{-\\gamma s}\nA_{U^n(s)}^n\\phi_{\\kappa}^n(\\check{X}^n(s))\\,ds\\biggr]\\\\\n&&{}-\\gamma\\Ex\n\\biggl[\\int_0^{\\tau_{\\kappa',T}^n} e^{-\\gamma s} \\phi_{\\kappa\n}^n(\\check{X}^n(s))\\,ds\\biggr] +C\\log^{k_1+1} n\\\\\n& \\leq&\n\\Ex[e^{-\\gamma\\tau_{\\kappa',T}^n}\\phi_{\\kappa}^n(\\check\n{X}^n(\\tau_{\\kappa',T}^n))]+2C\\log^{k_1+1}n.\n\\end{eqnarray*}\n\\end{lem}\n\nWe will also use the following lemma where $c=(c_1,\\ldots, c_I)$ are\nthe cost coefficients (see Section \\ref{secmodel}).\n\\begin{lem}\\label{lemafterstop}\nLet $(x^n,q^n)$ be as in the conditions of Theorem \\ref\n{thmmain}. 
Then, there exists a constant $C$ that does not depend on\n$n$ such that\n\\begin{equation}\\label{eqafterstop2}\n\\Ex_{x^n,q^n}^{U}\\biggl[\\int_{\\tau_{\\kappa\n',T}^n}^{\\infty} e^{-\\gamma s} (e\\cdot c)\\bigl(e\\cdot\\check\n{X}^n(s)\\bigr)^+\\,ds\\biggr]\\leq C\\log^2n\n\\end{equation}\nand\n\\begin{equation}\\label{eqafterstop1}\n\\Ex_{x^n,q^n}^{U}[e^{-\\gamma\\tau_{\\kappa\n',T}^n}\\phi\n_{\\kappa}^n(\\check{X}^n(\\tau_{\\kappa',T}^n))]\\leq C\\log^2\nn\n\\end{equation}\nfor all $n$ and any admissible ratio control\n$U$.\n\\end{lem}\n\nWe postpone the proof of Lemma \\ref{lemito} to the end of the section\nand that of Lemma \\ref{lemafterstop} to the \\hyperref[app]{Appendix} and proceed now to\nprove the main result of the paper.\n\n\\subsection*{\\texorpdfstring{Proof of Theorem \\protect\\ref{thmmain}}{Proof of Theorem 2.1}} Let $h_{\\kappa\n}^{*,n}$ be the ratio function associated with the optimal Markov\ncontrol for the ADCP (as in Theorem \\ref{thmHJB1sol}). Since $\\kappa\n$ is fixed we omit the subscript $\\kappa$ and use $h^n=h_{\\kappa\n}^{*,n}$. Let $U_h^n$ be the ratio associated with the $h^n$-tracking policy.\n\nThe proof will proceed in three main steps. 
First, building on Theorem~\\ref{thmSSC} we will show that\n\\begin{equation} \\label{eqinterim2}\n\\Ex\\biggl[\\int_0^{\\tau_{\\kappa\n',T}^n} e^{-\\gamma s} L(\\check{X}^n(s),U_h^n(s))\\,ds\\biggr] \\leq\\phi\n_{\\kappa}^n(\\check{X}^n(0))+C\\log^{k_0+3} n.\n\\end{equation}\nUsing Lemma \\ref{lemafterstop}, this implies\n\\begin{eqnarray}\\label{eqinterim13}\nC\\lam(x^n,q^n,U_h^n)&=&\\Ex\\biggl[\\int_0^{\\infty} e^{-\\gamma s} L(\\check\n{X}^n(s),U_h^n(s))\\,ds\\biggr]\\nonumber\\\\[-8pt]\\\\[-8pt]\n&\\leq&\\phi_{\\kappa}^n(\\check\n{X}^n(0))+C\\log^{k_0+3} n.\\nonumber\n\\end{eqnarray}\nFinally, we will\nshow that for any ratio control $U^n$,\n\\begin{equation}\\label{eqinterim303}\n\\phi_{\\kappa}^n(\\check\n{X}^n(0))\\leq\\Ex\\biggl[\\int_0^{\\infty} e^{-\\gamma s} L(\\check\n{X}^n(s),U^n(s))\\,ds\\biggr]+C\\log^{k_1+1}n,\n\\end{equation}\nwhere we recall that $k_1=k_0-m$. In turn,\n\\[\nV^n(x^n,q^n)\\geq\\phi_{\\kappa}^n(x^n-\\nu n)-C\\log^{k_1+1} n \\geq\nC\\lam(x^n,q^n,U_h^n)-2C\\log^{k_1+1}n,\n\\]\nwhich establishes the statement of the theorem.\\vadjust{\\goodbreak}\n\nWe now turn to prove each of (\\ref{eqinterim2}) and (\\ref{eqinterim303}).\n\n\\subsection*{\\texorpdfstring{Proof of (\\protect\\ref{eqinterim2})}{Proof of (61)}} To simplify\nnotation we fix $\\kappa>0$ throughout and let $h^n(\\cdot\n)=h_{\\kappa}^{*,n}$. 
Using Lemma \\ref{lemito} we have\n\\begin{eqnarray}\\label{eqinterim1}\n&&\\Ex[e^{-\\gamma\\tau_{\\kappa',T}^n}\\phi_{\\kappa}^n(\\check\n{X}^n(\\tau_{\\kappa',T}^n))]\\nonumber\n\\\\\n&&\\qquad\\leq \\phi_{\\kappa}^n(\\check{X}^n(0))+\n\\Ex\\biggl[\\int_0^{\\tau_{\\kappa',T}^n} e^{-\\gamma s}\nA_{U_h^n(s)}^n\\phi_{\\kappa}^n(\\check{X}^n(s))\\,ds\\biggr]\\\\\n&&\\qquad\\quad{}-\\gamma\\Ex\\biggl[\\int_0^{\\tau_{\\kappa',T}^n} e^{-\\gamma s} \\phi\n_{\\kappa}^n(\\check{X}^n(s))\\,ds\\biggr]+ C\\log^{k_1+1} n .\\nonumber\n\\end{eqnarray}\nFrom the definition of $h^n$ as a minimizer in the HJB equation we have that\n\\begin{eqnarray*}\n0&=& \\Ex\\biggl[\\int_0^{\\tau_{\\kappa',T}^n} e^{-\\gamma s}\nA_{h^n(\\check{X}^n(s))}^n\\phi_{\\kappa}^n(\\check{X}^n(s))\\,ds\\biggr]\\\\\n&&{}-\\gamma\\Ex\\biggl[\\int_0^{\\tau_{\\kappa',T}^n} e^{-\\gamma s} \\phi\n_{\\kappa}^n(\\check{X}^n(s))\\,ds\\biggr]\\\\\n&&{}+\\Ex\\biggl[\\int\n_0^{\\tau_{\\kappa',T}^n} e^{-\\gamma s} L(\\check{X}^n(s),h^n(\\check\n{X}^n(s)))\\,ds\\biggr] .\n\\end{eqnarray*}\nBy Theorem \\ref{thmSSC} we then have that\n\\begin{eqnarray}\\label{eqinterim101}\nC\\log^{k_0+3}n&\\geq& \\Ex\\biggl[\\int_0^{\\tau_{\\kappa',T}^n}\ne^{-\\gamma s} A_{U_h^n(s)}^n\\phi_{\\kappa}^n(\\check{X}^n(s))\\,ds\n\\biggr]\\nonumber\\\\[-2pt]\n&&{} -\\gamma\\Ex\\biggl[\\int_0^{\\tau_{\\kappa',T}^n}\ne^{-\\gamma s}\n\\phi_{\\kappa}^n(\\check{X}^n(s))\\,ds\\biggr]\\nonumber\\\\[-9pt]\\\\[-9pt]\n&&{} +\\Ex\\biggl[\\int_0^{\\tau_{\\kappa',T}^n} e^{-\\gamma s} L(\\check\n{X}^n(s),U_h^n(s))\\,ds\\biggr]\\nonumber\\\\[-2pt]\n&\\geq&0.\\nonumber\n\\end{eqnarray}\nSince $\\phi_{\\kappa}^n$ is nonnegative, combining (\\ref{eqinterim1})\nand (\\ref{eqinterim101}) we have that\n\\[\n\\Ex\\biggl[\\int_0^{\\tau_{\\kappa',T}^n} e^{-\\gamma s} L(\\check\n{X}^n(s),U_h^n(s))\\,ds\\biggr]\\leq\\phi_{\\kappa}^n(\\check\n{X}^n(0))+C\\log^{k_0+3} n,\n\\]\nwhich concludes the proof of (\\ref{eqinterim2}).\n\n\\subsection*{\\texorpdfstring{Proof of 
(\\protect\\ref{eqinterim303})}{Proof of (63)}} We now show that\n$V^n(x,q)\\geq\\phi_{\\kappa}^n(\\check{X}^n(0))-C\\log^{k_1+1} n$. To\nthat end, fix an arbitrary ratio control $U^n$ and recall that by the\nHJB equation,\n\\[\nA_u^n\\phi_{\\kappa}^n(x)-\\gamma\\phi_{\\kappa\n}^n(x)+L(x,u)\\geq0\n\\]\nfor all $u\\in\\mathcal{U}$ and $x\\in\\mB\n_{\\kappa}^n$. In turn, using the second inequality in Lemma \\ref\n{lemito} we have that\n\\begin{eqnarray*}\n&&\\Ex[e^{-\\gamma\\tau_{\\kappa\n',T}^n}\\phi_{\\kappa}^n(\\check{X}^n(\\tau_{\\kappa',T}^n))\n]\\\\[-2pt]\n&&\\qquad\\geq \\phi_{\\kappa}^n(\\check{X}^n(0))\n-\\Ex\\biggl[\\int_0^{\\tau_{\\kappa',T}^n} e^{-\\gamma s} L(\\check\n{X}^n(s),U^n(s))\\,ds\\biggr]\\\\[-2pt]\n&&\\qquad\\quad{} -2C\\log^{k_1+1} n.\n\\end{eqnarray*}\nUsing Lemma \\ref{lemafterstop}, we have, however, that\n\\[\n\\Ex[e^{-\\gamma\\tau_{\\kappa',T}^n}\\phi_{\\kappa}^n(\\check\n{X}^n(\\tau_{\\kappa',T}^n))]\\leq C\\log^{2} n\n\\]\nfor a redefined constant $C$ so that\n\\begin{eqnarray*}C\\log^{2} n &\\geq& \\phi\n_{\\kappa}^n(\\check{X}^n(0))-\n\\Ex\\biggl[\\int_0^{\\tau_{\\kappa',T}^n} e^{-\\gamma s} L(\\check\n{X}^n(s),U^n(s))\\,ds\\biggr]\\\\[-2pt]\n&&{}-2C\\log^{k_1+1} n\\\\&\\geq& \\phi_{\\kappa\n}^n(\\check{X}^n(0))-\n\\Ex\\biggl[\\int_0^{\\infty} e^{-\\gamma s} L(\\check\n{X}^n(s),U^n(s))\\,ds\\biggr]\\\\[-2pt]\n&&{}-2C\\log^{k_1+1} n\n\\end{eqnarray*}\nand, finally,\n\\[\n\\phi_{\\kappa}^n(\\check{X}^n(0))\\leq\\Ex\\biggl[\\int_0^{\\infty}\ne^{-\\gamma s}\nL(\\check{X}^n(s),U^n(s))\\,ds\\biggr]+C\\log^{k_1+1}n\\vadjust{\\goodbreak}\n\\]\nfor a redefined constant $C>0$. 
This concludes the proof of (\\ref\n{eqinterim303}) and of the theorem.\n\nWe end this section with the proof of Lemma \\ref{lemito} in which the\nfollowing auxiliary lemma will be of use.\n\\begin{lem}\\label{lemmartingales}\nFix $\\kappa>0$ and an admissible ratio control $U^n$ and\nlet $\\mathbb{X}\\lam=(X\\lam,Q\\lam,Z\\lam,\\check{X}\\lam)$ be the\ncorresponding queueing process. Let\n\\[\n\\tau_{\\kappa,T}^n=\\inf\\{t\\geq0\\dvtx\\check{X}^n(t)\\notin\\mB_{\\kappa\n}^n\\}\\wedge T\\log n,\n\\]\nand $(\\check{W}_i^n,i\\in\\I)$ be as defined in (\\ref{eqWtildedefin}).\nThen, for each $i\\in\\I$, the process $\\check\n{W}_i^n(\\cdot\\wedge\\tau_{\\kappa,T}^n)$ is a square integrable\nmartingale with respect to the filtration $(\\mathcal{F}_{t\\wedge\\tau\n_{\\kappa,T}^n}^n)$ as are the processes\n\\[\n\\mathcal{M}_i^n(\\cdot)=\\bigl(\\check{W}_i^n(\\cdot\\wedge\\tau_{\\kappa\n,T}^n)\\bigr)^2-\\int_0^{\\cdot\\wedge\\tau_{\\kappa,T}^n} (\\sigma\n_i^n(\\check{X}^n(s),U^n(s)))^2\\,ds\n\\]\nand\n\\[\n\\mathcal{V}_i^n(\\cdot)=\\bigl(\\check{W}_i^n(\\cdot\\wedge\\tau_{\\kappa\n,T}^n)\\bigr)^2-\\sum_{s\\leq\\cdot\\wedge\\tau_{\\kappa,T}^n} (\\Delta\\check\n{W}_i^n(s))^2.\n\\]\n\\end{lem}\n\nLemma \\ref{lemmartingales} follows from basic results on martingales\nassociated with time-changes of Poisson processes. The detailed proof\nappears in the \\hyperref[app]{Appendix}.\n\n\\subsection*{\\texorpdfstring{Proof of Lemma \\protect\\ref{lemito}}{Proof of Lemma 6.1}}\nNote that, as in\n(\\ref{eqcheckXdynamics}), $\\check{X}^n$ satisfies\n\\[\n\\check{X}_i\\lam(t)=\\check{X}_i\\lam(0)+\\int_0^t b_i^n(\\check\n{X}^n(s),U^n(s))\\,ds+\\check{W}_i\\lam(t),\n\\]\nand is a semimartingale. 
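Before applying It\^{o}'s formula, it may help to recall the standard identities behind Lemma \ref{lemmartingales}, which are used repeatedly below; a sketch, with $N$ a unit-rate Poisson process and $A$ an adapted, nondecreasing, integrable time change (hypothetical stand-ins for the primitives behind $\check{W}_i^n$):

```latex
% Standard time-changed Poisson identities (sketch): the compensated process
M(t):=N(A(t))-A(t)
% is a square integrable martingale under suitable integrability of A,
% and so are
M(t)^2-A(t)
\quad\mbox{and}\quad
M(t)^2-\sum_{s\leq t}(\Delta M(s))^2,
% the analogues of M_i^n and V_i^n in the lemma; note that the sum of
% squared jumps equals N(A(t)), since all jumps of N are of size 1.
```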
Applying It\\^{o}'s formula for\nsemimartingales (see, e.g., Theorem 5.92 in \\cite{vandervaart}) we\nhave for all $t\\leq\\tau_{\\kappa',T}^n$, that\n\\begin{eqnarray*}\ne^{-\\gamma t}\\phi_{\\kappa}^n(\\check{X}^n(t))&=&\\phi_{\\kappa\n}^n(\\check{X}^n(0))\\\\\n&&{}+\n\\sum_{s\\leq t\\dvtx |\\Delta\\check{X}^n(s)|> 0} e^{-\\gamma s}[\\phi\n_{\\kappa}^n(\\check{X}^n(s))-\\phi_{\\kappa}^n(\\check{X}^n(s-))]\\\\\n&&{}-\n\\sum_{i\\in\\I}\\sum_{s\\leq t\\dvtx |\\Delta\\check{X}^n(s)|> 0} e^{-\\gamma\ns}(\\phi_{\\kappa}^n)_i(\\check{X}^n(s-))\\Delta\\check{X}_i^n(s)\\\\\n&&{}+\n\\sum_{i\\in\\I}\\int_0^t e^{-\\gamma s} (\\phi_{\\kappa}^n)_i(\\check\n{X}^n(s-))b_i^n(\\check{X}^n(s),U^n(s))\\,ds\\\\\n&&{}-\\gamma\\int_0^t\ne^{-\\gamma s} \\phi_{\\kappa}^n(\\check{X}^n(s))\\,ds\n\\end{eqnarray*}\nand, after rearranging terms, that\n\\begin{eqnarray*}\n&&e^{-\\gamma t}\\phi_{\\kappa}^n(\\check{X}^n(t))\\\\\n&&\\qquad=\n\\phi_{\\kappa}^n(\\check{X}^n(0))\n+ \\frac{1}{2}\\sum_{i\\in\\I\n}\\sum_{s\\leq t\\dvtx |\\Delta\\check{X}^n(s)|>0}e^{-\\gamma s} (\\phi\n_{\\kappa}^n)_{ii}(\\check{X}^n(s-))(\\Delta\\check{X}_i^n(s))^2\\\\\n&&\\qquad\\quad{}+\n\\sum_{i\\in\\I}\\int_0^t e^{-\\gamma s} (\\phi_{\\kappa}^n)_i(\\check\n{X}^n(s-))b_i^n(\\check{X}^n(s),U^n(s))\\,ds\\\\\n&&\\qquad\\quad{}+ C^n(t)-\\gamma\\int_0^t\ne^{-\\gamma s} \\phi_{\\kappa}^n(\\check{X}^n(s))\\,ds,\n\\end{eqnarray*}\nwhere\n\\begin{eqnarray*} C^n(t)&=& \\sum_{s\\leq t\\dvtx |\\Delta\\check\n{X}^n(s)|>0}e^{-\\gamma s} \\biggl[\\phi_{\\kappa}^n(\\check\n{X}^n(s))-\\phi_{\\kappa}^n(\\check{X}^n(s-))\\\\\n&&\\hphantom{\\sum_{s\\leq t\\dvtx |\\Delta\\check\n{X}^n(s)|>0}e^{-\\gamma s} \\biggl[}{}-\\sum_{i\\in\\I}(\\phi\n_{\\kappa}^n)_i(\\check{X}^n(s-)) \\Delta\\check{X}_i^n(s) \\\\\n&&\\hphantom{\\sum_{s\\leq t\\dvtx |\\Delta\\check\n{X}^n(s)|>0}e^{-\\gamma s} \\biggl[}{}-\\frac\n{1}{2}\\sum_{i\\in\\I} (\\phi_{\\kappa}^n)_{ii}(\\check\n{X}^n(s-))(\\Delta\\check{X}_i^n(s))^2\\biggr].\n\\end{eqnarray*}\nSetting 
$t=\\tau_{\\kappa',T}^n$ as defined in the statement of the\nlemma and taking expectations on both sides we have\n\\begin{eqnarray}\\label{eqinterim404}\n&&\\Ex[e^{-\\gamma\\tau_{\\kappa',T}^n}\\phi_{\\kappa}^n(\\check\n{X}^n(t))]\\nonumber\\\\\n&&\\qquad=\\phi_{\\kappa}^n(\\check{X}^n(0))\n+\n\\sum_{i\\in\\I}\\Ex\\biggl[\\int_0^{\\tau_{\\kappa',T}^n} e^{-\\gamma s}\n(\\phi_{\\kappa}^n)_i(\\check{X}^n(s-)) b_i^n(\\check\n{X}^n(s),U^n(s))\\,ds\\biggr]\\nonumber\\\\[-8pt]\\\\[-8pt]\n&&\\qquad\\quad{}+\\frac{1}{2}\\sum\n_{i\\in\\I}\\Ex\\biggl[\\sum_{s\\leq t\\dvtx |\\Delta\\check\n{X}^n(s)|>0}e^{-\\gamma s} (\\phi_{\\kappa}^n)_{ii}(\\check\n{X}^n(s-))(\\Delta\\check{X}_i^n(s))^2\\biggr]\\nonumber\\\\\n&&\\qquad\\quad{}+ \\Ex\n[C^n(\\tau_{\\kappa',T}^n)]-\\gamma\\Ex\\biggl[\\int_0^{\\tau_{\\kappa\n',T}^n} e^{-\\gamma s} \\phi_{\\kappa}^n(\\check{X}^n(s))\\,ds\n\\biggr].\\nonumber\n\\end{eqnarray}\n\nWe will now examine each of the elements on the right-hand side of\n(\\ref{eqinterim404}). First, note that $\\Delta\\check\n{X}_i^n(s)=\\Delta\\check{W}_i^n(s)$ and, in particular,\n\\begin{eqnarray*}\n&&\\Ex\n\\biggl[\\sum_{s\\leq\\tau_{\\kappa',T}^n\\dvtx |\\Delta\\check\n{X}^n(s)|>0}e^{-\\gamma s} (\\phi_{\\kappa}^n)_{ii}(\\check\n{X}^n(s-))(\\Delta\\check{X}_i^n(s))^2\\biggr]\\\\\n&&\\qquad=\n\\Ex\n\\biggl[\\sum_{s\\leq\\tau_{\\kappa',T}^n\\dvtx |\\Delta\\check\n{X}^n(s)|>0}e^{-\\gamma s} (\\phi_{\\kappa}^n)_{ii}(\\check\n{X}^n(s-))(\\Delta\\check{W}_i^n(s))^2\\biggr].\n\\end{eqnarray*}\nUsing the fact that $\\mathcal{V}_i^n$, as defined in Lemma \\ref\n{lemmartingales}, is a martingale as well as the fact that $\\phi\n_{\\kappa}^n(\\check{X}^n(s))$ and its derivative processes are bounded\nup to~$\\tau_{\\kappa'}^n$, we have that the processes\n\\begin{equation}\\label{eqbarV}\n\\bar{\\mathcal{V}}_i^n(\\cdot):=\\int_0^{\\cdot\\wedge\n\\tau\n_{\\kappa',T}^n} e^{-\\gamma 
s}(\\phi_{\\kappa}^n)_{ii}(\\check\n{X}^n(s-))\\,d\\mathcal{V}_i^n(s)\n\\end{equation}\nand\n\\begin{equation}\\label{eqbarM}\n\\bar{\\mathcal{M}}_i^n(\\cdot):=\\int_0^{\\cdot\\wedge\n\\tau\n_{\\kappa',T}^n} e^{-\\gamma s}(\\phi_{\\kappa}^n)_{ii}(\\check\n{X}^n(s-))\\,d\\mathcal{M}_i^n(s)\n\\end{equation}\nare themselves\nmartingales with $\\bar{\\mathcal{V}}_i^n(0)=\\bar{\\mathcal\n{M}}_i^n(0)=0$ and in turn, by optional stopping, that\n$\\Ex[\\bar{\\mathcal{V}}_i^n(\\tau_{\\kappa',T}^n)]=\\Ex[\\bar\n{\\mathcal{M}}_i^n(\\tau_{\\kappa',T}^n)]$ (see,\\vspace*{1pt} e.g., Lemma 5.45 in\n\\cite{vandervaart}). In turn, by the definition of $\\mathcal\n{M}_i^n(\\cdot)$ and $\\mathcal{V}_i^n(\\cdot)$ we have\n\\begin{eqnarray*}\n&&\\Ex\\biggl[\\sum_{s\\leq\\tau_{\\kappa',T}^n\\dvtx\n|\\Delta\\check{X}^n(s)|>0}e^{-\\gamma s} (\\phi_{\\kappa\n}^n)_{ii}(\\check{X}^n(s-))(\\Delta\\check{W}_i^n(s))^2\\biggr]\\\\\n&&\\qquad=\\Ex\\biggl[\\int_0^{\\tau_{\\kappa',T}^n} e^{-\\gamma s} (\\phi_{\\kappa}^n)_{ii}(\\check\n{X}^n(s-))\\,d(\\check{W}_i^n(s))^2\\biggr]\\\\\n&&\\qquad=\\Ex\\biggl[\\int\n_0^{\\tau_{\\kappa',T}^n} e^{-\\gamma s} (\\phi_{\\kappa}^n)_{ii}(\\check\n{X}^n(s-))(\\sigma_i^n(\\check{X}^n(s),U^n(s)))^2\\,ds\\biggr].\n\\end{eqnarray*}\nPlugging this back into (\\ref{eqinterim404}) we have that\n\\begin{eqnarray*}\n&&\\Ex[e^{-\\gamma\\tau_{\\kappa',T}^n}\\phi_{\\kappa}^n(\\check\n{X}^n(t))]\\\\\n&&\\qquad=\\phi_{\\kappa}^n(\\check{X}^n(0))\n+\\sum_{i\\in\\I}\\Ex\\biggl[\\int_0^{\\tau_{\\kappa',T}^n} e^{-\\gamma s}\n(\\phi_{\\kappa}^n)_{i}(\\check{X}^n(s-)) b_i^n(\\check\n{X}^n(s),U^n(s))\\,ds\\biggr]\\\\\n&&\\qquad\\quad{}+\n\\frac{1}{2}\\sum_{i\\in\\I}\n\\Ex\\biggl[\\int_0^{\\tau_{\\kappa',T}^n} e^{-\\gamma s} (\\phi_{\\kappa\n}^n)_{ii}(\\check{X}^n(s-))(\\sigma_i^n(\\check{X}^n(s),U^n(s)) )^2\n\\,ds\\biggr]\\\\\n&&\\qquad\\quad{}-\\gamma\\Ex\\biggl[\\int_0^{\\tau_{\\kappa',T}^n}\ne^{-\\gamma s} \\phi_{\\kappa}^n(\\check{X}^n(s))\\,ds\\biggr]+ \\Ex\n[C^n(\\tau_{\\kappa',T}^n)],\n\\end{eqnarray*}\nwhich, 
using the definition of $A_u^n$ in (\\ref{eqgendefin}), yields\n\\begin{eqnarray*}\n&&\\Ex[e^{-\\gamma\\tau_{\\kappa',T}^n}\\phi_{\\kappa}^n(\\check\n{X}^n(t))]\\\\\n&&\\qquad=\\phi_{\\kappa}^n(\\check{X}^n(0))\n+\\Ex\\biggl[\\int_0^{\\tau_{\\kappa',T}^n} e^{-\\gamma s}\nA_{U^n(s)}^n\\phi_{\\kappa}^n(\\check{X}^n(s))\\,ds\\biggr]\\\\\n&&\\qquad\\quad{}-\\gamma\\Ex\n\\biggl[\\int_0^{\\tau_{\\kappa',T}^n} e^{-\\gamma s} \\phi_{\\kappa\n}^n(\\check{X}^n(s))\\,ds\\biggr]\\\\\n&&\\qquad\\quad{}+ \\Ex[C^n(\\tau_{\\kappa',T}^n)].\n\\end{eqnarray*}\nTo complete the proof it then remains only to show that there\nexists a~constant $C$ such that\n\\[\n|\\Ex[C^n(\\tau_{\\kappa',T}^{n})]|\\leq C\\log\n^{k_1+1} n.\n\\]\nTo that end, note that by Taylor's\nexpansion,\n\\begin{eqnarray*}\\phi_{\\kappa}^n(\\check{X}^n(s))&=&\\phi_{\\kappa\n}^n(\\check{X}^n(s-))\n+\\sum_{i\\in\\I}(\\phi_{\\kappa}^n)_i(\\check{X}^n(s-))\\Delta\n\\check{X}_i^n(s)\\\\\n&&{}\n+\\frac{1}{2}\\sum_{i\\in\\I}(\\phi_{\\kappa}^n)_{ii}\\bigl(\\check\n{X}^n(s-)+\\eta_{\\check{X}^n(s-)}\\bigr)(\\Delta\\check{X}_i^n(s))^2,\n\\end{eqnarray*}\nwhere\n$\\eta_{\\check{X}^n(s-)}$ is such that $\\check{X}^n(s-)+\\eta_{\\check\n{X}^n(s-)}$ is between $\\check{X}^n(s-)$ and $\\check{X}^n(s-)+\\Delta\n\\check{X}^n(s)$. 
In turn, adding and subtracting a term, we have that\n\\begin{eqnarray}\\label{eqCninterim}\n&&\\phi_{\\kappa}^n(\\check{X}^n(s))-\\phi_{\\kappa\n}^n(\\check{X}^n(s-))-\\sum_{i\\in\\I}(\\phi_{\\kappa}^n)_i(\\check\n{X}^n(s-)) \\Delta\\check{X}_i^n(s) \\nonumber\\\\\n&&\\quad{}-\\frac{1}{2}\\sum_{i\\in\\I}(\\phi_{\\kappa}^n)_{ii}(\\check\n{X}^n(s-))(\\Delta\\check{X}_i^n(s))^2\\\\\n&&\\qquad = \\sum_{i\\in\\I}\\frac\n{1}{2}\\bigl((\\phi_{\\kappa}^n)_{ii}\\bigl(\\check{X}^n(s-)+\\eta_{\\check\n{X}^n(s-)}\\bigr) -(\\phi_{\\kappa}^n)_{ii}(\\check{X}^n(s-))\\bigr)(\\Delta\n\\check{X}_i^n(s))^2.\\nonumber\n\\end{eqnarray}\nSince the jumps are of size $1$ and, with probability 1, there are no\nsimultaneous jumps, we have that $|\\eta_{\\check{X}^n(s-)}|\\leq1$.\nAdding the discounting, summing and taking expectations we have\n\\begin{eqnarray}\\label{eqinterim2222}\\quad\n&&\\Ex[C^n(t)]\\nonumber\\\\\n&&\\qquad\\leq\\Ex\\biggl[\\sum_{s\\leq t\\dvtx\n|\\Delta\\check{X}^n(s)|>0} e^{-\\gamma s} \\sum_{i\\in\\I}\\frac\n{1}{2}\\max_{y\\dvtx|y|\\leq1}\\bigl((\\phi_{\\kappa}^n)_{ii}\\bigl(\\check\n{X}^n(s-)+y\\bigr)\\\\\n&&\\qquad\\quad\\hspace*{163.6pt}{} -(\\phi_{\\kappa\n}^n)_{ii}(\\check{X}^n(s-))\\bigr)(\\Delta\\check{X}_i^n(s))^2\n\\biggr],\\nonumber\n\\end{eqnarray}\nand a lower bound can be created by minimizing over $y$ instead of\nmaximizing. 
Using again the fact that $\\Delta\\check{X}_i^n(t)=\\Delta\n\\check{W}_i^n(t)$ and that $\\bar{\\mathcal{M}}_i^n$ and $\\bar\n{\\mathcal{V}}_i^n$ as defined in (\\ref{eqbarM}) and (\\ref{eqbarV})\nare martingales, we have that\n\\begin{eqnarray}\\label{eqinterim505}\n\\Ex[C^n(t)]&\\leq&\\Ex\\biggl[\\int_0^t \\sum_{i\\in\\I\n}\\frac{1}{2}\\max_{y\\dvtx|y|\\leq1}\\bigl((\\phi_{\\kappa}^n)_{ii}\\bigl(\\check\n{X}^n(s-)+y\\bigr) \\nonumber\\\\\n&&\\hspace*{92.3pt}{} -(\\phi_{\\kappa}^n)_{ii}(\\check{X}^n(s-))\\bigr)\\\\\n&&\\hspace*{60.6pt}{}\\times(\\sigma\n_i^n(\\check{X}^n(s),U^n(s)))^2\\,ds\\biggr].\\nonumber\n\\end{eqnarray}\nFrom (\\ref{eqgenbound}) we have that\n\\begin{equation}\\label{eqinterim1111}\n\\frac{1}{2}\\biggl|\\sum_{i\\in\\I}\\bigl((\\phi_{\\kappa\n}^n)_{ii}(y)-(\\phi_{\\kappa}^n)_{ii}(x)\\bigr)(\\sigma_i^n(x,u))^2\n\\biggr|\\leq C\\log^{k_1} n\n\\end{equation}\nfor all $u\\in\\mathcal{U}$ and $x,y\\in\\mB_{\\kappa'}^n$ with\n$|x-y|\\leq1$. The proof is then concluded by plugging (\\ref\n{eqinterim1111}) into (\\ref{eqinterim505}), setting $t=\\tau_{\\kappa\n',T}^n$ and recalling that we can repeat all the above steps to obtain\na lower bound in (\\ref{eqinterim505}) by replacing $\\max_{y\\dvtx|y|\\leq\n1}$ with $\\min_{y\\dvtx|y|\\leq1}$ in~(\\ref{eqinterim2222}).\n\n\\section{Concluding remarks}\\label{secconclusions}\n\nThis paper proposes a novel approach for solving problems of dynamic\ncontrol of queueing systems in the Halfin--Whitt many-server\nheavy-traffic regime. Its main contribution is the use of Brownian\napproximations to construct controls that achieve optimality gaps that\nare logarithmic in the system size. This should be contrasted with the\noptimality gaps of size $o(\\sqrt{n})$ that are common in the\nliterature on asymptotic optimality. A distinguishing feature of our\napproach is the use of a \\textit{sequence} of Brownian control problems\nrather than a single (limit) problem. 
Having an entire sequence of\napproximating problems allows us to perform a more refined analysis,\nresulting in the improved optimality gap.\n\nIn further contrast with the earlier literature, in each of these\nBrownian problems the diffusion coefficient depends on both the system\nstate and the control. Incorporating the impact of control on diffusion\ncoefficients allows us to track the performance of the policy better\nbut, at the same time, it leads to a more complex diffusion control\nproblem in which the associated HJB equation is fully nonlinear and\nnonsmooth. For \\textit{each} Brownian problem, we show that the HJB\nequation has a sufficiently smooth solution that coincides with the\nvalue function and that admits an optimal Markov policy. Most\nimportantly, we derive useful gradient estimates that apply to the\nwhole sequence and bound the growth rate of the gradients with the\nsystem size. These bounds are crucial for controlling the approximation\nerrors when analyzing the original queueing system under the proposed\ntracking control.\n\nThe motivating intuition behind our approximation scheme is that the\nvalue functions of each queueing system and its corresponding Brownian\ncontrol problem ought to be close. In particular, the optimal control\nfor the Brownian problem should perform well for the queueing system.\nMoreover, the optimal Markov control of the Brownian problem can be\napproximated by a ratio (or tracking) control for the queueing system.\nWhile these observations are ``correct'' at a high level, they need to\nbe qualified further. Our analysis underscores two sources of\napproximation errors that need to be addressed in order to obtain the\nrefined optimality gaps.\nFirst, the value function of the Brownian control problem may be\nsubstantially different than that of the (preemptive) optimal control\nproblem for the queueing system. 
This difference must be quantified\nrelative to the system size, which we do indirectly through the\ngradient estimates for the value function of the Brownian control\nproblem; this is manifested, for example, in the proof of Lemma \\ref{lemito}.\n\nThe second source of error is in trying to imitate the optimal ratio\ncontrol of the approximating Brownian control by a tracking control in\nthe corresponding queueing system. The error arises because we insist\non having a~nonpreemptive control for the queueing system. Whereas\nunder a preemptive control, one may be able to rearrange the queues\ninstantaneously to match the tracking function of the Brownian system,\nthis is not possible with nonpreemptive controls. Instead, we carefully\nconstruct and analyze the performance of the proposed nonpreemptive\ntracking policy. In doing so, we prove that the tracking control\nimitates closely the Brownian system with respect to a specific\nintegrated functional of the queueing dynamics (see Theorem \\ref\n{thmSSC} and Remark \\ref{remSSC}). 
Here too, the gradient estimates\nfor the value function of the Brownian system play a key role.\n\nWhile the focus of this paper has been a relatively simple model\nto illustrate the key ideas behind our approach and the important steps\nin the analysis, we expect that similar results can be established in\nthe cases of impatient customers, more general cost structures as well\nas more general network structures.\n\nAs suggested by the preceding analysis, the viability of these\nextensions and others will depend on whether it is possible to (a)\nsolve the\nsequence of Brownian control problems and establish the necessary\ngradient estimates and (b) establish the corresponding approximation\nresult for the nonpreemptive tracking control.\n\nWhile we expect that the results of \\cite{TandG} on fully nonlinear\nelliptic PDEs can be invoked for the more general settings, extending\nour analysis which builds on those results may not be always straightforward.\nIn particular, it is not immediately obvious how to generalize the\nproof of the tracking result in Theorem \\ref{thmSSC} to more general settings.\n\nNevertheless, we can make some observations about the extensions\nmentioned above:\n\\begin{itemize}\n\\item\\textit{General convex costs.} As discussed in the\n\\hyperref[sec1]{Introduction}, the analysis of the convex holding cost case will\nprobably be simpler as one tends to get ``interior'' solutions in that\ncase as opposed to the corner solutions in the linear cost case, which\ncauses nonsmoothness.\nWe expect that the enhanced smoothness (relative to the linear holding\ncost case) will simplify the analysis of the HJB equations as well as\nthat of the tracking performance.\n\n\\item\\textit{Abandonment.} Our starting point in the analysis is that,\namong preemptive policies, work conserving policies are optimal. This\nis not, in general, true when customers are impatient and may abandon\nwhile waiting (see the discussion in Section 5.1 of \\cite{AMR02}). 
As is
the case in \cite{AMR02}, our analysis will also go through for the
case of impatient customers provided that the cost structure is such
that work conservation is optimal among preemptive policies.
\item\textit{General networks.} Inspired by Atar's generalization
\cite{atar2005scheduling} of \cite{AMR02} to tree-like networks,
we expect that such a generalization is viable in our
setting as well. Indeed, we expect that the analysis of the (sequence
of) HJB equations and the sequence of ADCPs will be fairly similar for the
tree-like network setting. We expect that, in that more general
setting, it would be more complicated to bound the performance of the
tracking policies as in Theorem~\ref{thmSSC}.
\end{itemize}

\begin{appendix}\label{app}
\section*{Appendix}

\subsection*{\texorpdfstring{Proof of Lemma \protect\ref{lemexistenceofcontrolled}}{Proof of Lemma 3.1}}

Up to $\tau_{\kappa}^n$, both functions $b^n(\cdot,u)$ and $\sigma
^n(\cdot,u)$ are bounded and Lipschitz continuous (uniformly in $u$).
With these conditions satisfied, strong existence and uniqueness follow
as in Appendix D of \cite{FlemingSoner}. Specifically, strong
existence follows by successive approximations as in the proof of
Theorem~2.9 of \cite{KaS91} and uniqueness follows as in Theorem 2.5
there.

\subsection*{\texorpdfstring{Proof of Proposition \protect\ref{propsolPHJB}}{Proof of Proposition 4.1}} Fix
$\kappa>0$, $n\in\bbZ_+$ and $a>0$. Recall that (\ref{eqHJB2})
corresponds to finding $\phi_{\kappa,a}^n\in\mC^2(\mB)$ such that
\begin{equation}\label{eqHJB1min}
0=F_a[x,\phi_{\kappa,a}^n(x),D\phi_{\kappa,a}^n(x),D^2\phi_{\kappa,a}^n(x)],\qquad x\in\mB,
\end{equation}
and so that
$\phi_{\kappa,a}^n=0$ on $\partial\mB$, where \mbox{$F_a[\cdot]$} is as
defined in (\ref{eqFForm}).
Then, Proposition~\ref{propsolPHJB}
will follow from Theorem 17.18 in \cite{TandG} upon verifying certain
conditions. The gradient estimates will also follow from \cite{TandG}
by carefully tracing some constants to identify their dependence on
$\kappa,n$ and $a$.

To that end, note that the function $F_a^k(x,z,p,r)$ [as defined in
(\ref{eqFdefin})] is linear in the $(z,p,r)$ arguments for all $k\in
\I$ and $x\in\mB$. In turn, this function is concave in these
arguments. Hence, to apply Theorem 17.18 of \cite{TandG} it remains to
establish that condition (17.53) of \cite{TandG} is satisfied for each
of these functions. In the following we suppress the constant $a>0$
from the notation. It suffices to show that there exist constants
$\underbar{\Lambda}\leq\bar{\Lambda}$ and $\eta$ such that
uniformly in $k\in\I$, $y=(x,z,p,r)\in\Gamma$, and $\xi\in\mathbb{R}^I$
\begin{eqnarray}\label{eqFcond1}
&\displaystyle
0< \underbar{\Lambda}|\xi|^2\leq\sum_{i,j}F^k_{i,j}[y] \xi_i\xi
_j\leq\bar{\Lambda} |\xi|^2,&\\
\label{eqFcond2}
&\displaystyle \max\{
|F^k_p[y]|,|F^k_z[y]|,|F^k_{rx}[y]|,|F^k_{px}[y]|,|F^k_{zx}[y]|\}
\leq\eta\underbar{\Lambda},&
\\
\label{eqFcond3}
&\displaystyle
\max\{ |F^k_x[y]|,|F^k_{xx}[y]|\}\leq\eta\underbar
{\Lambda}(1+|p|+|r|),&
\end{eqnarray}
where
\[
F^k_{i,j}(x,z,p,r)=\frac{\partial}{\partial r_{ij}}F^k(x,z,p,r),\qquad
F^k_{x_l}(x,z,p,r)=\frac{\partial}{\partial x_{l}}F^k(x,z,p,r)\vadjust{\goodbreak}
\]
and
\[
(F^k_{rx}(x,z,p,r))_{ilj}=\frac{\partial^2}{\partial r_{il}\,\partial
x_{j}}F^k(x,z,p,r).\vspace*{-2pt}
\]
The other cross-derivatives are defined similarly.
We will show that we
can choose $\underbar{\Lambda}=\varepsilon_0 n$, $\bar{\Lambda}=
\varepsilon_1 n$, $\eta=\varepsilon_2$ for constants $\varepsilon
_0,\varepsilon_1$ and $\varepsilon_2$ that do not depend on $n$ and
$a$---this will be important in establishing the aforementioned
gradient estimates. To establish (\ref{eqFcond1}) note that, given
$\xi\in\mathbb{R}^I$,
\begin{equation}\label{eqFijDeriv}\quad
F^{k}_{ij}\xi_i\xi_j=\cases{
\frac{1}{2}\bigl(\lambda_i^n+\mu_i(\nu_in+x_i)\bigr)\xi
_i^2, &\quad for $i=j, i\neq k$,\vspace*{1pt}\cr
\frac{1}{2}\bigl(\lambda_i^n+\mu_i(\nu_in+x_i)\bigr)\xi_i^2 -\frac
{1}{2}f(e\cdot x)\xi_k^2, &\quad for $i=j=k$,\vspace*{1pt}\cr
0, &\quad otherwise.}\vspace*{-2pt}
\end{equation}
Hence,
\[
\sum_{i,j}F_{ij}^k \xi_i\xi_j=\frac{1}{2}\sum_{i\in\I}\bigl(\lambda
_i^n+\mu_i(\nu_in+x_i)\bigr)\xi_i^2-\frac{1}{2}f(e\cdot x)\xi_k^2.\vspace*{-2pt}
\]

Consequently, for $(x,z,r,p)\in\Gamma$ we have that
\[
\sum_{i,j}F_{ij}^k \xi_i\xi_j\leq I \bigl( \lambda+\mu_{\max
}n+\mu_{\max}\kappa\sqrt{n}\log^mn\bigr)\sum_{i\in\I}\xi_i^2+
\frac{1}{2}\kappa\sqrt{n}\log^m n\xi_k^2,\vspace*{-2pt}
\]
where $\mu_{\max}=\max_{k}\mu_k$. In particular, we can choose
$\varepsilon_1>0$ so that for all $n\in\bbZ_+$,
\[
\sum_{i,j}F_{ij}^k \xi_i\xi_j\leq\varepsilon_1 n|\xi|^2.\vspace*{-2pt}
\]
To obtain the lower bound note that, for $y\in\Gamma$,
\[
\sum_{i,j}F_{ij}^k \xi_i\xi_j\geq\frac{1}{2}\Bigl(\min_{i\in\I
}\lambda_i^n+\min_{i\in\I}\mu_i\nu_in-\mu_{\max}\kappa\sqrt{n}\log^m n
\Bigr)\sum_{i\in\I}\xi_i^2
-\frac{1}{2}\xi_k^2\kappa\sqrt{n}\log^mn.\vspace*{-2pt}
\]
Hence, we can find $\varepsilon_0>0$ such that for all $n$,
\[
\sum_{i,j}F_{ij}^k \xi_i\xi_j\geq\varepsilon_0 n|\xi|^2.\vspace*{-2pt}
\]
Note that above $\varepsilon_0$ and $\varepsilon_1$ can depend on
$\kappa$ but they do not depend on $n$ and $a$.
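The choice $\underbar{\Lambda}=\varepsilon_0 n$, $\bar{\Lambda}=\varepsilon_1 n$ can be sanity-checked numerically. The sketch below evaluates the quadratic form of (\ref{eqFijDeriv}) at random points; the rates $\lambda_i^n,\mu_i,\nu_i$, the proxy for $f(e\cdot x)$ and the candidate constants $\varepsilon_0,\varepsilon_1$ are illustrative assumptions, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
I, n, kappa, m = 3, 10_000, 1.0, 1
lam = np.array([0.3, 0.3, 0.4]) * n   # assumed lambda_i^n (order n)
mu = np.array([1.0, 1.5, 2.0])        # assumed service rates mu_i
nu = np.array([0.3, 0.2, 0.5])        # assumed staffing fractions nu_i

bound = kappa * np.sqrt(n) * np.log(n) ** m   # |x_i| <= kappa sqrt(n) log^m n on Gamma
eps0, eps1 = 0.01, 10.0                       # candidate eps_0, eps_1

ok = True
for _ in range(1000):
    k = int(rng.integers(I))
    x = rng.uniform(-bound, bound, size=I)
    f = min(max(x.sum(), 0.0) + 1.0, bound)   # proxy: 0 <= f(e.x) <= kappa sqrt(n) log^m n
    xi = rng.normal(size=I)
    # quadratic form sum_{i,j} F^k_{ij} xi_i xi_j from (eqFijDeriv)
    quad = 0.5 * np.sum((lam + mu * (nu * n + x)) * xi ** 2) - 0.5 * f * xi[k] ** 2
    ok = ok and (eps0 * n * (xi @ xi) <= quad <= eps1 * n * (xi @ xi))
print(ok)
```

With these parameters the ellipticity sandwich holds at every sampled point, illustrating why the constants can be taken proportional to $n$.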
Hence, we have
established (\ref{eqFcond1}) and we turn to (\ref{eqFcond2}). To
that end, note that
\begin{eqnarray}\label{eqFp}
F_{p_k}^k(x,z,p,r)&=&f(e\cdot x)+l_k\lam-\mu
_kx_k \quad\mbox{and }\nonumber\\[-9pt]\\[-9pt]
F_{p_i}^k(x,z,p,r)&=&l_i\lam-\mu_ix_i \qquad\mbox{for }
i\neq k.\nonumber\vspace*{-2pt}
\end{eqnarray}
Therefore,
\begin{eqnarray*} |F_{p}^k|&\leq& (e\cdot x)^+ +1+\sum_{i}|l_i\lam
-\mu_i x_i|\\[-2pt]
&\leq&
I\kappa\sqrt{n}\log^mn+1+I\max_{i}\bigl(|l_i\lam|+\mu_i\kappa\sqrt
{n}\log^m n\bigr),\vspace*{-2pt}\vadjust{\goodbreak}
\end{eqnarray*}
where we used the simple observation that $f(e\cdot x)\leq(e\cdot
x)^++1$. Clearly, we can choose $\varepsilon_2$ so that $|F_p^k|\leq
\varepsilon_2\varepsilon_0\sqrt{n}\log^m n$. Also $F_z^k=-\gamma$
and \mbox{$F_{zx}=0$} so that by re-choosing $\varepsilon_2$ large enough we
have $\max\{|F^k_z[y]|,|F^k_{zx}[y]|\}\leq\varepsilon_2\varepsilon
_0\sqrt{n}\log^m n$. Finally, by (\ref{eqFijDeriv}) we have that
\begin{eqnarray*}
F^k_{r_{ij}x_l}&=&0 \qquad\mbox{for } i\neq j,\\
F^k_{r_{ii}x_j}&=&0 \qquad\mbox{for } i\neq k, i\neq
j,\\
F^k_{r_{ii}x_i}&=&\frac{1}{2}\mu_i \qquad\mbox{for } i\neq k,\\
F^k_{r_{kk}x_k}&=&\frac{1}{2}\mu_k-\frac{1}{2}\,\frac{\partial
}{\partial x_k}f(e\cdot x),\\
F^k_{r_{kk}x_j}&=&-\frac{1}{2}\,\frac{\partial}{\partial x_j}f(e\cdot
x) \qquad\mbox{for } j\neq k.
\end{eqnarray*}
Thus,
\[
|F_{rx}^k|^2\leq\sum_{l\in\I}\frac{1}{2}\biggl|\frac{\partial
}{\partial x_l}f(e\cdot x)\biggr|^2+\frac{1}{2}\mu_{\max}\leq\frac
{1}{2}(1+\mu_{\max}),
\]
where we used the fact that $f(\cdot)$ is continuously differentiable
with Lipschitz constant $1$ (independently of $a$).
Finally,
\[
F_{x_i}^k=\frac{\partial}{\partial x_i}f(e\cdot x)\biggl(c_k+\mu
_kp_k-\frac{1}{2}\mu_kr_{kk}\biggr)-\mu_ip_i+\frac{1}{2}\mu_ir_{ii},
\]
so that
\begin{equation}\label{eqFx}
|F_{x_i}^k|\leq|c_k|+\mu_k|p|+\tfrac{1}{2}\mu
_k|r|+\mu
_i|p|+\tfrac{1}{2}\mu_i |r|.
\end{equation}
Also, note that
\[
F_{x_ix_j}^k=\frac{\partial^2}{\partial x_i\,\partial x_j}f(e\cdot
x)\biggl(c_k+\mu_kp_k-\frac{1}{2}\mu_kr_{kk}\biggr),
\]
so that
\[
F_{x_ix_j}^k=\cases{
2\bigl[c_k+\mu_kp_k-\frac{1}{2}\mu_kr_{kk}\bigr], &\quad
if $|e\cdot x|\leq\frac{1}{4}$,\vspace*{2pt}\cr
0, &\quad otherwise.}
\]
Combining the above gives
\[
|F_{xx}^k|\leq\varepsilon_2\varepsilon_0(1+|p|+|r|)
\]
for suitably\vspace*{1pt} redefined $\varepsilon_2$, which concludes the proof that
the conditions (\ref{eqFcond1})--(\ref{eqFcond3}) hold with $\bar
{\Lambda}=\varepsilon_1n$, $\underbar{\Lambda}=\varepsilon_0n$
and $\eta=\varepsilon_2$. Having verified these conditions, the
existence and uniqueness of the solution $\phi_{\kappa,a}^n$ to
(\ref{eqHJB2}) now follows from Theorem~17.18 in \cite{TandG}.

To obtain the gradient estimates in (\ref{eqgradients})
we first outline how the solution~$\phi
_{\kappa,a}^n$ is obtained in \cite{TandG} as a limit of solutions to
smoothed equations (we refer the reader to~\cite{TandG}, page 466, for
the more elaborate description). To that end, let
$F_a^i$ be as defined in (\ref{eqFdefin}) and for $y\in\Gamma$ define
\begin{equation}\label{eqGhPDE}
F^h[y]=G_{h}(F^1_a[y],\ldots,F^I_a[y]),
\end{equation}
where
\[
G_h(y)=h^{-I}\int_{\bar{y}\in\bbR^I}\rho\biggl(\frac{y-\bar
{y}}{h}\biggr)G_0(\bar{y})\,d\bar{y}
\]
and $G_0(x)=\min_{i\in\I}x_i$ and $\rho(\cdot)$ is a mollifier on
$\bbR^I$ (see \cite{TandG}, page 466). $F^h$ satisfies all the bounds
in (\ref{eqFcond1})--(\ref{eqFcond3}) uniformly in $h$; cf. \cite
{TandG}, page 466.
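The smoothing $G_h$ of the pointwise minimum is easy to visualize numerically. The sketch below implements $G_h$ for $I=2$ with the standard bump-function mollifier, a concrete choice made here for illustration (any mollifier works), normalized numerically on the quadrature grid; as $h\to0$, $G_h\to G_0=\min$.

```python
import numpy as np

def G_h(y, h, grid=201):
    # G_h(y) = h^{-2} * integral of rho((y - ybar)/h) * min(ybar_1, ybar_2) dybar,
    # computed by quadrature with a product bump mollifier supported on (-1, 1)^2.
    u = np.linspace(-1, 1, grid)
    U1, U2 = np.meshgrid(u, u, indexing="ij")
    bump = lambda v: np.where(
        np.abs(v) < 1, np.exp(-1.0 / np.maximum(1.0 - v ** 2, 1e-12)), 0.0
    )
    w = bump(U1) * bump(U2)
    w /= w.sum()                        # normalize rho numerically (integrates to 1)
    # substitution ybar = y - h*u turns the integral into a weighted average
    return float(np.sum(w * np.minimum(y[0] - h * U1, y[1] - h * U2)))

y = np.array([0.3, -0.2])
for h in (0.5, 0.1, 0.02):
    print(h, G_h(y, h))   # tends to G_0(y) = min(0.3, -0.2) = -0.2 as h -> 0
```

Away from the "kink" $\{y_1=y_2\}$ the smoothed operator agrees with the minimum once $h$ is small, which is why the bounds (\ref{eqFcond1})--(\ref{eqFcond3}) transfer to $F^h$ uniformly in $h$.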
Then, there exists a unique solution $u^h$ to the equation
\begin{equation} \label{eqGhPDE2}
F^h[u^h]=0
\end{equation}
on $\mB_{\kappa}^n$ with
$u^h=0$ on $\partial\mB_{\kappa}^n$.

The solution $\phi_{\kappa,a}^n$ is now obtained as a limit of $\{
u^h\}$ in the space $C_*^{2,\alpha}(\mB)$ as defined in (\ref
{eqCstar}). Moreover, since the gradient bounds are shown in \cite
{TandG} to be independent of $h$, it suffices for our purposes to fix
$h$ and focus on the construction of the gradient bounds.

Our starting point is the bound at the bottom of page 461 of \cite
{TandG} by which
\begin{equation}\label{eqonemoreinterim}
|u^h|^*_{2,\alpha,\mB_{\kappa}^n}\leq\check
{C}(a,n)(1+|u^h|^*_{2,\mB_{\kappa}^n}),
\end{equation}
where\vspace*{-1pt} $|u^h|^*_{2,\mB_{\kappa}^n}=\sum_{j=0}^2 [u^h]_{j,\mB}^*$ and
$[\cdot]_{j,\mB}^*, j=0,1,2$, are as defined in Section~\ref{secADCP}.
The constant $\alpha(a,n)$ depends only on the number of
classes $I$ and on $\bar{\Lambda}/\underbar{\Lambda}$ (see~\cite
{TandG}, top of page 461), and this fraction equals, in our context,
$\varepsilon_1/\varepsilon_0$ and is thus constant and independent of
$n$ and $a$.

We will\vspace*{1pt} address the constant $\check{C}(a,n)$ shortly. We first argue
how one proceeds from (\ref{eqonemoreinterim}). Fix $0<\delta< 1$,
let $\epsilon=\delta/\check{C}(a,n)$ and $C(\epsilon)=2/(\epsilon
/8)^{1/\alpha}$ (see \cite{TandG}, top of page 132).
Then, applying
an interpolation inequality (see~\cite{TandG}, bottom of page 461 and
Lemma 6.32 on page 130), one obtains
\[
|u^h|^*_{2,0,\mB_{\kappa}^n}\leq C(\epsilon)|u^h|^*_{0,\mB_{\kappa}^n
}+\epsilon|u^h|^*_{2,\alpha,\mB_{\kappa}^n}.
\]
Plugging this back into (\ref{eqonemoreinterim}) one then has
\[
|u^h|^*_{2,\alpha,\mB_{\kappa}^n}\leq\check{C}(a,n)\biggl(1+\bar
{C}\check{C}(a,n)^{1/\alpha}|u^h|^*_{0,\mB_{\kappa}^n}
+\frac{\delta}{\check{C}(a,n)}|u^h|^*_{2,\alpha,\mB_{\kappa
}^n}\biggr)
\]
for a constant $\bar{C}$ that depends only on $\delta$ and $\alpha$.
In turn,
\[
|u^h|^*_{2,\alpha,\mB_{\kappa}^n}\leq\bar{C}(\check
{C}(a,n))^{1+1/\alpha}|u^h|^*_{0,\mB_{\kappa}^n}
\]
for a constant $\bar{C}$ that\vadjust{\goodbreak} does not depend on $a$ or $n$.

Hence, to obtain the required bound in (\ref{eqgradients})
it remains only to\break bound~$\check{C}(a,n)$. Following \cite{TandG},
building on equation (17.51) of \cite{TandG}, $\check{C}(a,n)$ is the
(minimal) constant
that satisfies
\begin{equation}\label{eqCandefin}
C(1+M_2)(1+\tilde{\mu}R_0+\bar{\mu}R_0^2)\leq
\check{C}(a,n)(1+|u^h|^*_{2,\mB}),
\end{equation}
where (as stated in \cite{TandG}, bottom of page 460) the (redefined)
constant $C$ depends only on the number of classes $I$ and on $\bar
{\Lambda}/\underbar{\Lambda}=\varepsilon_1/\varepsilon_0$.
The constants~$\tilde{\mu}$ and~$\bar{\mu}$ are defined in \cite{TandG}
and we will explicitly define them shortly. Here one should not
confuse $\bar{\mu}$ with the average service rate in our system. In
what follows $\bar{\mu}$ will only be used as the constant in \cite
{TandG}.
We now bound constants~$\\tilde{\\mu}$ and~$\\bar{\\mu}$.\nThese are defined by\n\\begin{eqnarray*}\n\\tilde{\\mu}&=&\\frac{D_0}{\\underbar{\\Lambda\n}(1+M_2)},\\qquad\n\\bar{\\mu} =\\frac{C(I)}{\\underbar{\\Lambda}}\n\\biggl(\\frac{A_0^2}{\\underbar{\\Lambda}\\epsilon}+\\frac{B_0}{1+M_2}\\biggr),\n\\\\\nD_0&=&\\sup_{x,y\\in\\mB}\\{|F^h_x(y,u^h(y),Du^h(y),D^2u^h(x))|\n\\\\\n&&\\hphantom{\\sup_{x,y\\in\\mB}\\{}{}+|F^h_z(y,u^h(y),Du^h(y),D^2u^h(x))||Du^h(y)|\\\\\n&&\\hphantom{\\sup_{x,y\\in\\mB}\\{}{}+|F^h_p(y,u^h(y),Du^h(y),D^2u^h(x))||D^2u^h(y)|\\},\n\\\\\nA_0&=&\\sup_{\\mB}\\{|F^h_{rx}|+|F^h_p|\\}, \\\\\nB_0&=& \\sup_{\\mB} \\{|F_{px}||D^2 u^h|+ |F_z||D^2\nu^h|+|F_{zx}||Du^h|+|F_{xx}|\\},\n\\end{eqnarray*}\nwhere $C(I)$ is a constant that depends only on the number of classes\n$I$, $\\epsilon\\in(0,1)$ is arbitrary and fixed (independent of $n$\nand $a$) and $M_2=\\sup_{\\mB}|D^2u^h|$. The constants $\\bar{\\mu}$,\n$\\tilde{\\mu}$ and $M_2$ are defined in \\cite{TandG}, pages 456--460,\nand $A_0$ and~$B_0$ are as on page 461 there.\n\nWe note that $F^h_z$ is a constant, $F^h_p$ is bounded by $\\bar\n{C}\\sqrt{n}\\log^m n$ for some constant $\\bar{C}$ [see (\\ref\n{eqgradients})] that depends only on $\\kappa$ and, by (\\ref{eqFx}),\n$|F_x^h|\\leq\\varepsilon_2\\varepsilon_0(1+|p|+|r|)$. In turn,\n$D_0\\leq4\\varepsilon_2\\varepsilon_0\\sqrt{n}\\log^m n\\sup_{\\mB\n}(1+|Du^h|+|D^2u^h|)$. 
Arguing similarly for $A_0$ and $B_0$ we find
that there exists a constant $\bar{C}$ (that does not depend on $n$
and $a$) such that
\[
A_0\leq\bar{C}\sqrt{n}\log^mn\quad\mbox{and}\quad B_0\leq\bar{C} \sup
_{\mB}(1+|Du^h|+|D^2u^h|),
\]
which in turn implies the existence of a redefined constant $\bar{C}$
such that
\[
\tilde{\mu}\leq\frac{\bar{C}\log^{m} n}{\sqrt{n}(1+M_2)}\sup
_{\mB}(1+|Du^h|+|D^2u^h|)
\]
and
\[
\bar{\mu}\leq\frac{\bar{C}\log^{2 m} n}{n}+\frac{\bar
{C}}{n(1+M_2)}\sup_{\mB}(1+|Du^h|+|D^2u^h|).
\]
The proof of the bound is concluded by plugging these back into (\ref
{eqCandefin}) and setting $R_0=\kappa\sqrt{n}\log^m n$ there to get
that
\[
\check{C}(a,n)\leq C\log^{4m(1+{1}/{\alpha})}n
\]
for some $C$ that does not depend on $a$ and $n$.

The constant $\tilde{C}$ on the right-hand side of (\ref{eqgradients})
(which can depend on $n$ but does not depend on $a$) is
argued as in the proof of Theorem 17.17 in~\cite{TandG} and we
conclude the proof by noting that the global Lipschitz constant (that
we allow to depend on~$n$) follows from Theorem 7.2 in
\cite{trudinger1983fully}.

We next turn to the proof of Theorem \ref{thmSSC}. First, we will
explicitly construct the queueing process under the $h$-tracking policy
and state a lemma that will be of use in the proof of the theorem.
Define $A_i^n(t)=\mN_i^a(\lambda_i^n t)$ so that~$A_i^n$ is the
arrival process of class-$i$ customers. Given a ratio control $U^n$ and
the associated queueing process $\mathbb{X}^n=(X^n,Q^n,Z^n,\check{X}^n)$,
$\check{W}^n$ is as defined in~(\ref{eqWtildedefin}).
Also, we define
\[
D^n(t)=\sum_{i\in\I}\mN_i^d\biggl( \mu_i\int_0^t Z_i\lam(s)\,ds
\biggr).
\]
That is, $D^n(t)$ is the total number of service completions by time
$t$ in the $n$th system.

For the construction of the queueing process under the tracking policy
we define a family of processes
$\{\mathcal{A}_{i,\mH}^n, i\in\I,\mathcal{H}\subset\I\}$ as follows:
let $\{\xi_{\mH}^l; l\in\bbZ_+,\mH\subset\I\}$ be a family of i.i.d.
uniform $[0,1]$ random variables independent of
$\bar{\mathcal{F}}_{\infty}$ as defined in~(\ref{eqcheckFdefin}). For
each $\mathcal{H}\subset\I$, define the processes
$(\mathcal{A}_{i,\mH}^n, i\in\I)$ by
\begin{equation}\label{eqmAdefin}
\mathcal{A}_{i,\mH}^n(t)=\sum_{l=1}^{D^n(t)}1\biggl\{\frac{\sum_{k<
i,k\in\mH}\lambda_k}{1\vee\sum_{k\in\mH}\lambda_k} < \xi_{\mH
}^l\leq
\frac{\sum_{k\leq i,k\in\mH}\lambda_k}{1\vee
\sum_{k\in\mH}\lambda_k}\biggr\}.
\end{equation}

We note that for any strict subset $\mH\subset\I$ and $i\in\mH$,
the probability that a jump of $D^n(t)$ results in a jump of $\mathcal
{A}_{i,\mH}^n$ is equal to $\lambda_i^n/\sum_{k\in\mH}\lambda
_k^n=a_i/\sum_{k\in\mH}a_k$ and is strictly greater than $\lambda
_i^n/\sum_{k\in\I}\lambda_k^n=a_i$. We define
\begin{equation}\label{eqepsilondefin}
\epsilon_i=\min
_{\mathcal{H}}\biggl(\frac{a_i}{\sum_{k\in\mH}a_k}-a_i\biggr),
\end{equation}
where the minimum is over strict subsets $\mH\subset\I$ with $i\in
\mH$, and note that $\epsilon_i>0$ by our assumption
that $a_i>0$ for all $i\in\I$ (see Section~\ref{secmodel}). Let
$\bar{\epsilon}=\min_{i}\epsilon_i/4$.

Note that at time intervals in which $i\in\mK(\cdot)=\mH$ (see
Definition \ref{defintracking}) for some $\varnothing\neq\mH\subset
\I$, the process $\mathcal{A}_{i,\mH}^n$ jumps with probability
$\lambda_i^n/\sum_{k\in\mH}\lambda_k$ whenever a server becomes
available (i.e., upon a jump of $D^n$).
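The excess admission probabilities $\epsilon_i$ and the thinning in (\ref{eqmAdefin}) are simple enough to check numerically. In the sketch below the fractions $a_i$ are illustrative assumptions; $\epsilon_i$ is computed as the smallest excess of $a_i/\sum_{k\in\mH}a_k$ over $a_i$ across strict subsets $\mH$ containing $i$, and the uniform-threshold thinning is verified empirically for one subset.

```python
import random
from itertools import combinations

a = [0.2, 0.3, 0.5]            # assumed arrival fractions a_i (sum to 1)
I = range(len(a))

def eps(i):
    # minimum over strict subsets H containing i of a_i / sum_{k in H} a_k - a_i
    subsets = [H for r in range(1, len(a)) for H in combinations(I, r) if i in H]
    return min(a[i] / sum(a[k] for k in H) - a[i] for H in subsets)

eps_i = [eps(i) for i in I]
eps_bar = min(eps_i) / 4
assert all(e > 0 for e in eps_i)   # positive since every a_i > 0

# Empirical check of the thinning: for H = {0, 2}, the fraction of service
# completions whose uniform mark falls in class 0's interval tends to a_0/(a_0+a_2).
random.seed(1)
H = (0, 2)
tot = sum(a[k] for k in H)
hits = sum(random.random() <= a[0] / tot for _ in range(200_000))
print(eps_i, eps_bar, hits / 200_000)
```

The positive margin $\bar{\epsilon}$ is exactly what makes the queues of "untracked" classes drain at rate of order $\bar{\epsilon}n$ below.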
In turn, we will use the
processes $\{\mathcal{A}_{i,\mH}^n, i\in\I,\mathcal{H}\subset\I\}
$ to generate (randomized) admissions to service of class-$i$ customers
under the $h$-tracking policy.

More specifically, under the $h$-tracking policy (see Definition \ref
{defintracking}) a customer from the class-$i$ queue enters service in
the following events:
\begin{longlist}
\item[(i)] A class-$i$ customer that arrives at time $t$ enters
service immediately if there are idle servers, that is, if $(e\cdot
\check{X}\lam(t-))^{-}>0$.
\item[(ii)] If a server becomes available at time $t$ (corresponding
to a jump of~$D^n$) and $t$ is such that $i\in\mathcal{K}(t-)=\mH
\subset\I$, then a customer from the class-$i$ queue is admitted to
service at time $t$ with probability $\lambda_i^n/\sum_{k\in\mH
}\lambda_k$. This admission to service corresponds to a jump of the
process $\mathcal{A}_{i,\mH}^n$ as defined in (\ref{eqmAdefin}).
\item[(iii)] If a server becomes available at time $t$ (corresponding
to a jump of~$D^n$) and $t$ is such that $\mathcal{K}(t-)=\varnothing$
and $i=\min\{k\in\I\dvtx Q_k^n(t-)>0\}$, then a~class-$i$ customer is
admitted to service.
\end{longlist}
Formally, the queueing process $\mathbb{X}\lam=(X\lam,Q\lam,Z\lam
,\check{X}\lam)$ satisfies
\begin{eqnarray*} Z_i\lam(t)&=&Z_i\lam(0)+\int_0^t 1\bigl\{\bigl(e\cdot
\check{X}^n(s)\bigr)^->0\bigr\}\,dA_i^n(s)\\
&&{}+\sum_{\mH\subset\I}\int
_0^t1\{i\in\mathcal{K}(s-),\mathcal{K}(s-)=\mathcal{H}\}\,
d\mathcal{A}_{i,\mH}\lam(s)\\
&&{}+ \int_0^t 1\bigl\{\mK
(s-)=\varnothing,i=\min\{k\in\I\dvtx Q_k^n(s-)>0\}\bigr\}\,dD^n(s)\\
&&{}-\mN
_i^d\biggl( \mu_i\int_0^t Z_i\lam(s)\,ds \biggr),\qquad i\in\I,\\
X_i\lam(t)&=&X_i\lam(0)+A_i^n(t)-\mN_i^d\biggl(\mu_i \int_0^t
Z_i\lam(s) \,ds \biggr),\qquad i\in\I, \\ Q_i\lam(t)&=&X_i\lam(t)-Z_i\lam
(t),\qquad
i\\in\\I.\n\\end{eqnarray*}\n\nThe second, third and fourth terms on the right-hand side of the\nequation for $Z_i^n$ correspond, respectively, to the events described\nby items (i)--(iii) above. Finally, $\\check{X}\\lam$ is defined from\n$X\\lam$ as in (\\ref{eqtildeXdefin}). The fact that the above system\nof equations has a unique solution is proved by induction on arrival\nand service completions times (see, e.g., the proof of Theorem 9.2 of\n\\cite{MMR98}). Clearly, $\\mathbb{X}^n$ satisfies (\\ref\n{eqdynamics2})--(\\ref{eqnon-negativity2}) with $U_i\\lam$ there\nconstructed from $Q\\lam$ using~(\\ref{eqUQmap}).\n\nWe note that, with this construction, the tracking policy is admissible\nin the sense of Definition \\ref{definadmissiblecontrols}. Also, it\nwill be useful for the proof of Theorem~\\ref{thmSSC} to note that\nwith this construction, if $[s,t]$ is an interval such that \\mbox{$i\\in\n\\mathcal{K}(u)\\subset\\I$} for all $u\\in[s,t]$ then\n\\begin{equation}\\label{eqQtrack}\\qquad\nQ_i^n(t)-Q_i^n(s)=A_i^n(t)-A_i^n(s)-\\sum_{\\mH\\subset\n\\I}\\int\n_s^t1\\{\\mathcal{K}(u-)=\\mathcal{H}\\}\\,d\\mathcal{A}_{i,\\mH\n}\\lam(u).\n\\end{equation}\n\nBefore proceeding to the proof of Theorem \\ref{thmSSC} the following\nlemma provides preliminary bounds for arbitrary ratio controls.\n\\begin{lem}\\label{lemstrongappbounds}\nFix $\\kappa,T>0$ and a ratio control $U^n$, let $\\mathbb\n{X}^n=(X^n,Q^n,\\allowbreak Z^n,\\check{X}^n)$ be the associated queueing process\nand define\n\\[\n\\tau_{\\kappa,T}^n=\\inf\\{t\\geq0\\dvtx\\check{X}^n(t)\\notin\\mB_{\\kappa\n}^n\\}\\wedge T\\log n.\n\\]\nThen, there exist constants $C_1,C_2,K_0>0$ (that depend on $T$ and\n$\\kappa$ but that do not depend on $n$ or on the ratio control $U^n$)\nsuch that for all $K>K_0$ and all $n$ large enough,\n\\begin{eqnarray}\\label{eqstrapp1}\n&&\\Pd\\Bigl\\{\\sup_{0\\leq t\\leq2T\\log n}|\\check\n{W}^n(t)|> K\\sqrt\n{n}\\log n\\Bigr\\}\\nonumber\\\\[-8pt]\\\\[-8pt]\n&&\\qquad\\leq 
C_1e^{-C_2K\log n},\nonumber\\
\label{eqstrapp2}
&&\Pd\bigl\{|\check{X}^n(t)-\check{X}^n(s)|> \bigl((t-s)+(t-s)^2
\bigr)K\sqrt{n}\log n+K\log n\nonumber\\
&&\hspace*{60pt}\mbox{ for some } s<t\leq\tau_{\kappa,T}^n\bigr\}\leq
C_1e^{-C_2K\log n},\\
\label{eqstrapp3}
&&\Pd\bigl\{A_i^n(t)-A_i^n(s)-\bigl(\mathcal{A}_{i,\mH}^n(t)-\mathcal
{A}_{i,\mH}^n(s)\bigr)> -\bar{\epsilon}n(t-s)+K\log n\nonumber\\
&&\hspace*{60pt}\mbox{ for some } s<t\leq\tau_{\kappa,T}^n
\mbox{ and some } i\in\mH\subset\I\bigr\}\nonumber\\
&&\qquad\leq C_1e^{-C_2 K\log n}.
\end{eqnarray}
\end{lem}
\begin{pf}
The bounds follow from standard exponential estimates for the
underlying unit-rate Poisson processes $\mN_i^a$ and $\mN_i^d$:
dividing the time horizon into subintervals of length $1/n$, on each
subinterval the relevant centered increment exceeds $K\sqrt{\log n}$
with probability at most $C_1e^{-C_2K\log n}$; for example,
\[
\Pd\Bigl\{\sup_{0\leq u\leq1/n}|\mN_i^a(\lambda_i^n(t+u))-\mN
_i^a(\lambda_i^nt)-\lambda_i^nu|> K\sqrt{\log n}\Bigr\}\leq C_1e^{-C_2 K\log n}.
\]
Since the number of intervals considered is of the order of $n\log n$,
the bound follows with redefined constants $C_1$ and $C_2$.
\end{pf}

\subsection*{\texorpdfstring{Proof of Theorem \protect\ref{thmSSC}}{Proof of Theorem 5.1}} Since $\kappa$
is fixed throughout, we use $h^n(\cdot)=h_{\kappa}^{*,n}(\cdot)$. As
in the statement of the theorem, let
\[
\psi^n(x,u)=L(x,u)+A^n_u\phi_{\kappa}^n(x)-\gamma\phi_{\kappa
}^n(x)\qquad \mbox{for } x\in\mB_{\kappa}^n, u\in\mathcal{U}
,\vadjust{\goodbreak}
\]
so that by the definition of $A^n_u(x)$ we have
\begin{eqnarray}\qquad
\psi^n(x,u)&=&-\gamma\phi_{\kappa}^n(x)\nonumber\\
&&{} + (e\cdot
x)^+\cdot\sum_{i\in\I}u_i\biggl\{ c_i+\mu_i(\phi_{\kappa
}^n)_i(x)-\mu_i\frac{1}{2}(\phi_{\kappa}^n)_{ii}(x)\biggr\}
\\
&&{}+\sum_{i\in\I} (l_i\lam-\mu_ix_i)(\phi_{\kappa}^n)_i(x)
+\frac{1}{2}\sum_{i\in\I} \bigl(\lambda_i^n+\mu_i(\nu
_in+x_i)\bigr)(\phi_{\kappa}^n)_{ii}(x).\nonumber
\end{eqnarray}
Defining, as before,
\[
M_{i}^n(z)=c_{i}+\mu_i(\phi_{\kappa}^n)_{i}(z)-\tfrac{1}{2}\mu
_i(\phi_{\kappa}^n)_{ii}(z),
\]
we have that
\[
\psi^n(x,u)-\psi^n(x,v)=(e\cdot x)^+\biggl(\sum
_{i\in\I}v_iM_i^n(x)-\sum_{i\in\I}u_iM_i^n(x)\biggr).
\]
Let $U^n$ be the ratio control associated with the $h^n$-tracking
policy, let $\mathbb{X}^n=(X^n,Q^n,Z^n,\check{X}^n)$ be the
associated queueing process and
define
\begin{eqnarray}\label{eqpsicheckdefin}
\check{\psi}^n(s)&=&\psi^n(\check
{X}^n(s),U^n(s))-\psi^n(\check{X}^n(s),h^n(\check{X}^n(s)))\nonumber
\\
&=&
\bigl(e\cdot\check{X}^n(s)\bigr)^+\sum_{i\in\I}h_i^n(\check
{X}^n(s))M_i^n(\check{X}^n(s)) \\
&&{}-\bigl(e\cdot
\check{X}^n(s)\bigr)^+ \sum_{i\in\I}U_i^n(s)M_i^n(\check
{X}^n(s)).\nonumber
\end{eqnarray}
Recall that, by construction, $Q_i^n(s)=(e\cdot\check
{X}^n(s))^+U_i^n(s)$ so that (\ref{eqpsicheckdefin}) can be
rewritten as
\begin{eqnarray*} \check{\psi}^n(s)&=&\psi^n(\check
{X}^n(s),U^n(s))-\psi^n(\check{X}^n(s),h^n(\check{X}^n(s)))
\\&=&
\bigl(e\cdot\check{X}^n(s)\bigr)^+\sum_{i\in\I}h_i^n(\check
{X}^n(s))M_i^n(\check{X}^n(s)) \\
&&{}-\sum_{i\in\I
}Q_i^n(s)M_i^n(\check{X}^n(s)).
\end{eqnarray*}
The theorem will be proved if we show that
\begin{equation}\label{eqwhatneed}
\Ex\biggl[\int_0^{\tau_{\kappa',T}^n}e^{-\gamma s}
|\check
{\psi}^n(s)|\,ds\biggr]\leq C\log^{k_0+3}n.
\end{equation}
To
that end, define a sequence of times $\{\tau_{l}^n\}$ as follows:
\[
\tau_{l+1}^n=\inf\{t> \tau_l^n\dvtx h^n(\check{X}^n(t))\neq h^n(\check
{X}^n(\tau_l^n))\}\wedge\tau_{\kappa',T}^n \qquad\mbox{for } l\geq0,
\]
where $\tau_0^n=\eta^n\wedge\tau_{\kappa',T}^n$ and
\begin{equation}\label{eqetandefin}
\eta
^n=t_0\frac{\log^mn}{\sqrt{n}}
\end{equation}
for\vspace*{1pt}
$t_0=4\kappa/\min_{i\in\I}\epsilon_i$ with $\epsilon_i$ as defined
in (\ref{eqepsilondefin}). Finally, we define $r^n=\sup
\{l\in\bbZ_+\dvtx \tau_{l}^n\leq\tau_{\kappa',T}^n\}$ and set $\tau
_{r^n+1}^n=\tau_{\kappa',T}^n$.
We then have
\begin{eqnarray*}
&&\int_0^{\tau
_{\kappa',T}^n}e^{-\gamma s} |\check{\psi}^n(s)|\,ds\\
&&\qquad=\int_0^{\tau_0^n}e^{-\gamma s} |\check{\psi}^n(s)|\,ds+\sum
_{l=1}^{r^n+1} \int_{\tau_{l-1}^n}^{\tau_l^n}e^{-\gamma s} |\check
{\psi}^n(s)|\,ds\\
&&\qquad=\int_0^{\tau_0^n}e^{-\gamma s} |\check{\psi}^n(s)|\,ds
+\sum_{l=1}^{r^n+1}\biggl(\int_{\tau
_{l-1}^n}^{(\tau_{l-1}^n+\eta^n)\wedge\tau_{l}^n}e^{-\gamma s}
|\check{\psi}^n(s)|\,ds+\int_{\tau_{l-1}^n+\eta^n}^{\tau_l^n\vee
(\tau_{l-1}^n+\eta^n) }e^{-\gamma s} |\check{\psi}^n(s)|\,ds
\biggr).
\end{eqnarray*}
The proof is now divided into three parts. We will show that,
under the conditions of the theorem,
\begin{eqnarray}\label{eqSSC1}
\Ex\Bigl[\sup_{1\leq l\leq
r^n+1}\sup_{\tau_{l-1}^n\leq s< (\tau_{l-1}^n+\eta^n)\wedge\tau
_l^n}|\check{\psi}^n(s)|\Bigr]&\leq& C\log^{k_0+2}n,
\\
\label{eqSSC2}
\Ex\Bigl[\sup_{1\leq l\leq r^n+1}\sup_{(\tau
_{l-1}^n+\eta
^n)\leq s< \tau_{l}^n\vee(\tau_{l-1}^n+\eta^n)}
|\check{\psi}^n(s)|\Bigr]&\leq& C\log^{k_0+2} n,
\end{eqnarray}
where we define $\sup_{(\tau_{l-1}^n+\eta^n)\leq s< \tau_{l}^n\vee
(\tau_{l-1}^n+\eta^n)}
|\check{\psi}^n(s)|=0$ if $\tau_l^n\leq\tau_{l-1}^n+\eta^n$.
Finally, we will show that
\begin{equation}\label{eqSSC3}
\Ex\biggl[\int_0^{\eta^n\wedge\tau_{\kappa
',T}^n}|\check
{\psi}^n(s)|\,ds\biggr]\leq C\log^{k_0}n.
\end{equation}
The proof of (\ref{eqSSC1}) hinges on the fact that, sufficiently
close to a change point~$\tau_l^n$, all the customer classes, $i$, for
which $h_i^n(\check{X}^n(s))=1$ for some $s$ in a~neighborhood of
$\tau_l^n$, will have similar values of $M_i^n(\check{X}^n(s))$. This
will follow from our gradient estimates for $\phi_{\kappa}^n$.
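The interval decomposition above can be sanity-checked numerically. In the sketch below, the change points, the value of $\eta^n$ and the integrand are simulated stand-ins (assumptions, not the paper's processes); the point is only that the integral over $[0,\tau_0^n)$ together with the two families of sub-integrals reassembles the whole integral.

```python
import numpy as np

def itrapz(y, x):
    # simple trapezoidal rule (avoids NumPy-version differences around np.trapz)
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2.0)

rng = np.random.default_rng(3)
T_end, eta = 5.0, 0.3
jumps = np.sort(rng.uniform(0, T_end, size=8))        # assumed change points tau_l
tau = np.concatenate(([eta], jumps[jumps > eta], [T_end]))

grid = np.linspace(0, T_end, 100_001)
psi = np.abs(np.sin(7 * grid)) * np.exp(-0.5 * grid)  # stand-in for e^{-gamma s}|psi(s)|
total = itrapz(psi, grid)

m0 = grid <= tau[0]
pieces = itrapz(psi[m0], grid[m0])                    # [0, tau_0): handled by (eqSSC3)
for l in range(1, len(tau)):
    lo, hi = tau[l - 1], tau[l]
    mid = min(lo + eta, hi)
    head = (grid >= lo) & (grid <= mid)               # first eta time units: (eqSSC1)
    tail = (grid >= mid) & (grid <= hi)               # remainder of interval: (eqSSC2)
    pieces += itrapz(psi[head], grid[head]) + itrapz(psi[tail], grid[tail])
print(abs(total - pieces) < 1e-2)
```

The residual is only quadrature error at the interval boundaries, confirming that no part of the horizon is counted twice or dropped.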
The
proof of (\ref{eqSSC2}) hinges on the fact that, $\eta^n$ time units
after a change point $\tau_l^n$, the queues of all the classes $i$ for
which $h_i^n(\check{X}^n(\tau_l^n))=0$ are small because, under the
tracking policy, these classes receive a significant share of the capacity.

Toward formalizing this intuition, define the following event on the
underlying probability space:
\begin{eqnarray*} \tilde{\Omega}(K)&=&\bigl\{|\check{X}^n(t)-\check
{X}^n(s)|\leq K \sqrt{n}\log^2 n(t-s)+K\log n,\\[-0.8pt]
&&\hspace*{60pt}\mbox{ for all } s<t\leq\tau_{\kappa',T}^n\bigr\},
\end{eqnarray*}
so that, by Lemma \ref{lemstrongappbounds}, $\Pd\{\tilde{\Omega
}(K)^c\}\leq C_1e^{-(C_2{K}/{2})\log n}$ for all $n$ large enough.
For $i\in\I$ and $t\leq\tau_{\kappa',T}^n$, let $\hat{\varsigma
}_i^n(t)$ be $\eta^n$ time units after the last time before $t$ at
which $h_i^n(\check{X}^n(\cdot))$ switched to $0$, let $\varsigma
_i^n(t)$ be the first time after $\hat{\varsigma}_i^n(t)$ at which
$h_i^n(\check{X}^n(\cdot))$ switches back to $1$, and define
\begin{equation}\label{eqvarsigmadefin}
\tilde{\varsigma}_i^n(t)=\cases{
\varsigma_i^n(t)\wedge t, &\quad if $Q_i^n(\hat{\varsigma
}_i^n(t))>4K\log n$,\cr
t, &\quad otherwise.}
\end{equation}
Then, we claim that on $\tilde{\Omega
}(K)$ and for all $t$ with $\tilde{\varsigma}_i^n(t)> \hat{\varsigma
}_i^n(t)$,
\begin{eqnarray}\label{eqSSCinterim}
&&\sup_{\hat{\varsigma}_i^n(t)\leq s< \tilde
{\varsigma}_i^n(t)}\bigl|\bigl(e\cdot\check{X}^n(s)\bigr)^+U_i^n(s)-\bigl(e\cdot\check
{X}^n(s)\bigr)^+h_i^n(\check{X}^n(s))\bigr|\nonumber\\[-8pt]\\[-8pt]
&&\qquad\leq12K\log n.\nonumber
\end{eqnarray}
Note that since $h_i^n(\cdot)\in\{0,1\}$, the
above is equivalently written as
\begin{equation}\label{eqSSCinterim2}
\sup_{\hat{\varsigma}_i^n(t)\leq s< \tilde
{\varsigma}_i^n(t)}Q_i^n(s)\leq12K\log n.
\end{equation}
In words,
when the process $\check{X}^n(t)$ enters a region in which
$h^n_i(\check{X}^n(\cdot))=0$ the queue of class $i$ will be drained
up to $12 K\log n$ within at most $\eta^n$ time units and it will
remain there up to $\tilde{\varsigma}_i^n(t)$. We postpone the proof
of (\ref{eqSSCinterim}) and use it in proceeding with the proof of
the theorem.

To that end, fix $l\geq0$ and let
\[
j^*_{l}=\min\mathop{\argmin}_{i\in\I}M_i^n(\check{X}^n(\tau_l^n)).
\]
Then, by the definition of the function $h^n$ in (\ref{eqixdefin}) we
have that $h_{j^*_l}^n(\check{X}^n(\tau_l^n))=1$ and $h_i^n(\check
{X}^n(\tau_l^n))=0$ for all $i\neq j^*_l$.
In particular,
\begin{eqnarray*}
\check{\psi}^n(s)&=&\bigl(e\cdot\check
{X}^n(s)\bigr)^+h_{j^*_l}^n(\check{X}^n(s))M_{j^*_l}^n(\check{X}^n(s))
\\
&&{}-\sum_{i\in\I}Q_i^n(s)M_i^n(\check{X}^n(s))
\end{eqnarray*}
for all $s\in[\tau_{l}^n,(\tau_{l}^n+\eta^n)\wedge\tau_{l+1}^n)$.
Let
\[
\mathcal{J}(\tau_l^n)=\{i\in\I\dvtx Q_i^n(\tau_l^n-)>4K\log n\}.
\]
Then, simple manipulations yield
\begin{eqnarray}\label{eqcheckpsiinterim}
|\check{\psi}^n(s)|&\leq&\sum_{i\notin\mathcal{J}(\tau
_l^n)\cup\{j^*_l\}}Q_i^n(s)|M_i^n(\check{X}^n(s))|\nonumber\\
&&{}+
|M_{j_l^*}^n(\check{X}^n(s))|\biggl|\bigl(e\cdot\check
{X}^n(s)\bigr)^+-\sum_{i\in\mathcal{J}(\tau_l^n)\cup\{j^*_l\}
}Q_i^n(s)\biggr|\\
&&{}+\sum_{i\in\mathcal
{J}(\tau_l^n)}Q_i^n(s)|M_i^n(\check{X}^n(s))-M_{j_l^*}^n(\check
{X}^n(s))|.\nonumber
\end{eqnarray}
We turn to bound each of the elements on the right-hand side of (\ref
{eqcheckpsiinterim}). First, note that for all $i\notin\mathcal
{J}(\tau_l^n)\cup\{j^*_l\}$ it follows from (\ref{eqSSCinterim2}) that
\[
\sup_{\tau_{l}^n\leq s<(\tau_{l}^n+\eta^n)\wedge\tau
_{l+1}^n}Q_i^n(s)\leq12K\log n.
\]
Also, by (\ref{eqgradients1}) we have for all $i\in\I$ that
\begin{equation}\label{eqMboun4}
\sup_{0\leq s\leq\tau_{\kappa',T}^n}|M_i^n(\check{X}^n(s))|\leq
C\log^{k_1}n,\vadjust{\goodbreak}
\end{equation}
so that
\begin{equation}\label{eqitai1}
\sum_{i\notin\mathcal{J}(\tau_l^n)\cup\{j^*_l\}}Q_i^n(s)|M_i^n(\check
{X}^n(s))|\leq12IKC\log^{k_1+1}n
\end{equation}
for all $s\in
[\tau_{l}^n,(\tau_{l}^n+\eta^n)\wedge\tau_{l+1}^n)$ and a constant
$C$ that does not depend on $n$.
From (\\ref{eqSSCinterim2}) and from\nthe fact that $\\sum_{i\\in\\I}Q_i^n(s)=(e\\cdot\\check{X}^n(s))^+$ we\nsimilarly have that\n\\begin{equation}\\label{eqitai2}\n|M_{j_l^*}^n(\\check{X}^n(s)|\n\\biggl|\\bigl(e\\cdot\\check\n{X}^n(s)\\bigr)^+-\\sum_{i\\in\\mathcal{J}(\\tau_l^n)\\cup\\{j^*_l\\}\n}Q_i^n(s)\\biggr|\\leq12IC K\\log^{k_1+1}n.\\hspace*{-30pt}\n\\end{equation}\nTo\nbound the last element on the right-hand side of (\\ref\n{eqcheckpsiinterim}) note that for each $i\\in\\mathcal{J}(\\tau_l^n)$\nthere exists $\\tau_l^n-\\eta^n\\leq t\\leq\\tau_{l}^n$ such that\n$h_j^n(\\check{X}^n(t))=1$. Otherwise, we would have a contradiction to\n(\\ref{eqSSCinterim}). We now claim that for each \\mbox{$i\\in\\mathcal\n{J}(\\tau_l^n)$},\n\\begin{equation}\\label{eqinterim747}\n|M_i^n(\\check{X}^n(s))-M_{j^*_l}^n(\\check\n{X}^n(s))|\\leq\\frac{C\\log^{k_1+2} n}{\\sqrt{n}}\n\\end{equation}\nfor all $s$ in $[\\tau_l^n-\\eta^n,\\tau_l^n+\\eta\n^n]$. Indeed, by the definition of $\\tilde{\\Omega}(K)$, we have that\n$|\\check{X}^n(t)-\\check{X}^n(s)|\\leq C\\log^{m+2}n$ for all $s,t$ in\n$[\\tau_l^n-\\eta^n,\\tau_l^n+\\eta^n]$. As in the proof of (\\ref\n{eqgenbound}) [see, e.g., (\\ref{eqMbound})] we have that\n\\begin{equation}\\label{eqMbound2}\n|M_i^n(x)-M_i^n(y)|\\leq\\frac{C\\log^{k_2+m+2}n}{\\sqrt{n}},\\qquad i\\in\\I,\n\\end{equation}\nfor $x,y\\in\\mB_{\\kappa'}^n$ with $|x-y|\\leq\nC\\log^{m+2}n$. In turn,\n\\begin{equation}\\label{eqMbound3}\n|M_i^n(\\check{X}^n(t))-M_i^n(\\check\n{X}^n(s))|\\leq\\frac{C\\log^{k_2+m+2} n}{\\sqrt{n}}=\\frac{C\\log\n^{k_1+2} n}{\\sqrt{n}}\n\\end{equation}\nfor all $i\\in\\I$ and all\n$s,t\\in[\\tau_l^n-\\eta^n,\\tau_l^n+\\eta^n]$ where we used the fact\nthat \\mbox{$k_1=k_2+m$}. 
Since, for each $j\\in\\mathcal{J}(\\tau_l^n)$, there\nexists $\\tau_l^n-\\eta^n\\leq t\\leq\\tau_{l}^n$ such that $h_j^n(\\check\n{X}^n(t))=1$ we have, by the definition of $h^n$, that $j\\in\\argmin\n_{i\\in\\I} M_i^n(\\check{X}^n(t))$ for such $t$ so that (\\ref\n{eqinterim747}) now follows from (\\ref{eqMbound3}). Finally, recall\nthat\\break $\\sum_{i\\in\\I}Q_i^n(s)=(e\\cdot\\check{X}^n(s))^+\\leq\\kappa\n\\sqrt{n}\\log^mn$ for all $s\\leq\\tau_{\\kappa',T}^n$ and that\n$k_0=k_1+m$ so that by~(\\ref{eqinterim747})\n\\[\n\\sum_{i\\in\\mathcal{J}(\\tau_l^n)}Q_i^n(s)|M_i^n(\\check\n{X}^n(s))-M_{j_l^*}^n(\\check{X}^n(s))|\\leq C\\log^{k_0+2}n.\n\\]\nPlugging this into (\\ref{eqcheckpsiinterim}) together with (\\ref\n{eqitai1}) and (\\ref{eqitai2}) we then have that, on $\\tilde{\\Omega}(K)$,\n\\[\n\\sup_{\\tau_{l-1}^n\\leq s<(\\tau_{l-1}^n+\\eta\n^n)\\wedge\\tau_l^n}|\\check{\\psi}^n(s)| \\leq C\\log\n^{k_1+m+2}n=CK\\log^{k_0+2}n.\n\\]\nThis argument\\vspace*{1pt} is repeated for each $l$. To complete the proof of (\\ref\n{eqSSC1}) note that, using (\\ref{eqMboun4}) together\\vspace*{-1pt} with ${\\sup\n_{0\\leq s\\leq\\tau_{\\kappa',T}^n}}|e\\cdot\\check{X}^n(s)|\\leq\\kappa\n\\sqrt{n}\\log^mn$, we have that ${\\sup_{0\\leq s\\leq\\tau_{\\kappa\n',T}^n}}|\\check{\\psi}^n(s)|\\leq C\\sqrt{n}\\log^{k_1+m}n$. 
Applying H\\\"\n{o}lder's inequality we have that\n\\begin{eqnarray*}\n&&\\Ex\\Bigl[\\sup_{1\\leq l\\leq\nr^n+1}\\sup_{\\tau_{l-1}^n\\leq s< (\\tau_{l-1}^n+\\eta^n)}|\\check{\\psi\n}^n(s)|\\Bigr]\\\\\n&&\\qquad\\leq\n\\Ex\\Bigl[\\sup_{1\\leq l\\leq r^n+1}\\sup_{\\tau_{l-1}^n\\leq s< (\\tau\n_{l-1}^n+\\eta^n)}|\\check{\\psi}^n(s)|1\\{\\tilde{\\Omega}(K)\\}\n\\Bigr]\\\\\n&&\\qquad\\quad{}+\n\\Ex\\Bigl[\\max_{1\\leq l\\leq r^n+1}\\sup_{\\tau_{l-1}^n\\leq s< (\\tau\n_{l-1}^n+\\eta^n)}|\\check{\\psi}^n(s)|1\\{\\tilde{\\Omega}(K)^c\\}\n\\Bigr]\\\\\n&&\\qquad\\leq\nC\\log^{k_0+2} n + C\\sqrt{n}\\log^{k_1+m}n C_1e^{-(C_2\n{K}\/{2})\\log n}\n\\end{eqnarray*}\nfor redefined constants $C_1,C_2$, and (\\ref{eqSSC1}) now follows by\nchoosing $K$ large enough.\n\nWe turn to prove (\\ref{eqSSC2}). Rearranging terms in (\\ref\n{eqpsicheckdefin}) we write\n\\[\n\\check{\\psi}^n(s) = \\sum_{i\\in\\I}M_i^n(\\check\n{X}^n(s))\\bigl(\\bigl(e\\cdot\\check{X}^n(s)\\bigr)^+h_i^n(\\check{X}^n(s))-\\bigl(e\\cdot\n\\check{X}^n(s)\\bigr)^+ U_i^n(s)\\bigr),\n\\]\nso that equation (\\ref{eqSSC2}) now follows directly from (\\ref\n{eqSSCinterim}) and (\\ref{eqMboun4}) through an application of H\\\"\n{o}lder's inequality.\n\nFinally, to establish (\\ref{eqSSC3}), note that from the definition\nof $\\tau_{\\kappa',T}^n$,\n\\begin{eqnarray*}\n\\sup_{0\\leq t\\leq\\eta^n\\wedge\\tau_{\\kappa',T}^n\n} |\\check{\\psi}^n(t)|&\\leq& I\\sup_{0\\leq t\\leq\\eta^n\\wedge\\tau\n_{\\kappa',T}^n}|\\check{X}^n(t)|\\sum_{i\\in\\I}|M_i^n(\\check{X}^n(t))|\n\\\\&\\leq&\nI\\sup_{0\\leq t\\leq\\eta^n\\wedge\\tau_{\\kappa',T}^n}C\\log\n^{k_1}n|\\check{X}^n(t)|\\leq C \\kappa\\sqrt{n}\\log^{k_1+m}n.\n\\end{eqnarray*}\nIn turn,\n\\begin{equation}\\label{eqhowmanymoreinterims}\n\\Ex\\biggl[\\int_0^{\\tau_0^n}e^{-\\gamma t} |\\check\n{\\psi\n}^n(t)|\\,dt\\biggr]\\leq C\\log^{k_1+m}n=C\\log^{k_0}n.\n\\end{equation}\n\nWe have thus proved (\\ref{eqSSC1})--(\\ref{eqSSC3}) and to conclude\nthe proof of the theorem it remains 
only to establish (\\ref\n{eqSSCinterim}). To that end, let $\\check{\\varsigma}_i^n(t)$,\n$\\tilde\n{\\varsigma}_i^n(t)$ and $\\hat{\\varsigma}_i^n(t)$ be as in~(\\ref\n{eqchecktaudefin})--(\\ref{eqydefin}). Fix an interval $[l,s)\\subset\n(\\hat\n{\\varsigma}_i^n(t),\\tilde{\\varsigma}_i^n(t))$ such that\n$Q_i^n(u)>2K\\log n $ for all $u\\in[l,s)$. By the\\vadjust{\\goodbreak} definition of the\ntracking policy, (\\ref{eqQtrack}) holds on this interval so that, on\n$\\omega\\in\\tilde{\\Omega}(K)$,\n\\begin{eqnarray}\\label{equntilwhen3}\nQ_i^n(l)-Q_i^n(s)&\\leq& \\bar{\\epsilon}n(t-s)\n-\\frac{\\epsilon_i}{2}n(t-s)+2K\\log n\\nonumber\\\\[-8pt]\\\\[-8pt]\n&\\leq& -\\frac{\\epsilon_i}{4}n(t-s)+2K\\log n.\\nonumber\n\\end{eqnarray}\nEquation (\\ref{eqSSCinterim}) now follows directly from (\\ref\n{equntilwhen3}). Indeed, note for all $t\\leq\\tau_{\\kappa',T}^n$,\n$Q_i^n(t)\\leq(e\\cdot\\check{X}^n(t))^+\\leq|\\check{X}^n(t)|\\leq\n\\kappa\\sqrt{n}\\log^mn$. Hence, $Q_i^n(\\check{\\varsigma}_i^n(t))\\leq\n\\kappa\\sqrt{n}\\log^mn$. In turn, using (\\ref{equntilwhen3}) and\nassuming that $\\tilde{\\varsigma}_i^n(t)\\geq\\check{\\varsigma\n}_i^n(t)+\\eta^n$ we have that $Q_i^n(\\varsigma_{0,i}^n(t))\\leq4K\\log\nn$ for some time $\\varsigma_{0,i}^n(t)\\leq\\check{\\varsigma\n}_i^n(t)+\\eta^n$ with $\\eta^n$ as defined in (\\ref{eqetandefin}).\nAlso, let\n\\[\n\\varsigma_{2,i}^n(t)=\\inf\\{s\\geq\\varsigma_{0,i}^n(t)\\dvtx Q_i^n(s)\\geq\n12K\\log n \\}\n\\]\nand\n\\[\n\\varsigma_{1,i}^n(t)=\\sup\\{s\\leq\\varsigma_{2,i}^n(t)\\dvtx Q_i^n(s)\\leq\n8K\\log n \\}.\n\\]\nNote that (\\ref{equntilwhen3}) applies to any subinterval $[l,s)$ of\n$[\\varsigma_{1,i}^n(t),\\varsigma_{2,i}^n(t))$. 
In turn, $\\varsigma\n_{2,i}^n(t)\\leq\\tilde{\\varsigma}_i^n(t)$ would constitute\\vspace*{1pt} a\ncontradiction to (\\ref{equntilwhen3}) so that we must have that\n$Q_i^n(s)\\leq12K\\log n$ for all $s\\in[\\varsigma_{0,i}^n(t),\\tilde\n{\\varsigma}_i^n(t))$ with $\\varsigma_{0,i}^n(t)\\leq\\check{\\varsigma\n}_i^n(t)+\\eta^n$. Finally, note that $\\varsigma_{0,i}^n(t)$ can be\ntaken to be $t$ if $Q_i^n(t)\\leq4K\\log n$.\n\nThis concludes the proof of (\\ref{eqSSCinterim}) and, in turn, the\nproof of the theorem.\n\n\\subsection*{\\texorpdfstring{Proof of Lemma \\protect\\ref{lemafterstop}}{Proof of Lemma 6.2}} Let $T$,\n$\\tau_{\\kappa',T}^n$ and $(x^n,q^n)$ be as in the statement of the\nlemma. We first prove (\\ref{eqafterstop2}). To that end, we claim\nthat, for all $T$ large enough,\n\\begin{equation}\\label{eqthisisjustonemore}\n\\Ex_{x^n,q^n}\\biggl[\\int_{T\\log\nn}^{\\infty} e^{-\\gamma s} (e\\cdot c)\\bigl(e\\cdot\\check{X}^n(s)\\bigr)^+\\,ds\n\\biggr]\\leq C\\log^2n\n\\end{equation}\nfor some $C>0$ and all\n$n\\in\\bbZ$. This is a direct consequence of Lemma 3 in~\\cite{AMR02}\nthat, in our notation, guarantees that\n\\[\n\\Ex_{x^n,q^n}[|\\check{X}^n(t)|]\\leq C\\bigl(1+|x^n|+ \\sqrt{n}(t+t^2)\\bigr)\n\\]\nfor all $t\\geq0$ and some constant $C>0$. We use (\\ref\n{eqthisisjustonemore}) to prove Lemma \\ref{lemafterstop}. 
The\nassertion of the lemma will be established by showing that\n\\[\n\\Ex_{x^n,q^n}\\biggl[\\int_{\\tau_{\\kappa',T}^n}^{2T\\log n} e^{-\\gamma\ns} (e\\cdot c)\\bigl(e\\cdot\\check{X}^n(s)\\bigr)^+\\,ds\\biggr]\\leq C\\log^2n.\n\\]\nTo that end, applying H\\\"{o}lder's\ninequality, we have\n\\begin{eqnarray}\\label{eqinterim8}\n&&\\Ex_{x^n,q^n}\\biggl[\\int_{\\tau_{\\kappa\n',T}^n}^{2T\\log n} e^{-\\gamma s} (e\\cdot c)\\bigl(e\\cdot\\check\n{X}^n(s)\\bigr)^+\\,ds\\biggr]\\nonumber\\\\\n&&\\qquad\\leq\\Ex_{x^n,q^n}\\Bigl[(2T\\log\nn-\\tau_{\\kappa',T}^n)^+\\sup_{0\\leq t\\leq2T\\log n}(e\\cdot c)\\bigl(e\\cdot\n\\check{X}^n(t)\\bigr)^+\\Bigr]\\nonumber\\\\[-8pt]\\\\[-8pt]\n&&\\qquad\\leq\n\\sqrt{\\Ex_{x^n,q^n}\\bigl[\\bigl((2T\\log n-\\tau_{\\kappa\n',T}^n)^+\\bigr)^2\\bigr]}\\nonumber\\\\\n&&\\qquad\\quad{}\\times\\sqrt{\n\\Ex_{x^n,q^n}\\Bigl[\\Bigl(\\sup_{0\\leq t\\leq2T\\log n}(e\\cdot\nc)\\bigl(e\\cdot\\check{X}^n(t)\\bigr)^+\\Bigr)^2\\Bigr]}.\\nonumber\n\\end{eqnarray}\n\nUsing Lemma \\ref{lemstrongappbounds} we have that\n\\begin{equation}\\label{eqinterim9}\n\\Ex\n_{x^n,q^n}\\Bigl[\\Bigl(\\sup_{0\\leq t\\leq2T\\log n}(e\\cdot c)\\bigl(e\\cdot\n\\check{X}^n(t)\\bigr)^+\\Bigr)^2\\Bigr]\\leq Cn\\log^6n\n\\end{equation}\nfor some $C>0$ (that can depend on $T$). 
Also, since\n$m\\geq3$,\n\\[\n\\Pd\\{\\tau_{\\kappa',T}^n<2T\\log n\\}\\leq\\Pd\n\\Bigl\\{\\sup_{0\\leq t\\leq2T\\log n}|\\check{X}^n(t)|> \\kappa'\\sqrt{n}\\log\n^3 n -M\\sqrt{n}\\Bigr\\}.\n\\]\nChoosing $\\kappa'$ (and, in turn,\n$\\kappa$) large enough we then have, using Lemma \\ref\n{lemstrongappbounds}, that\n\\begin{equation}\\label{eqtauprobbound}\n\\Pd\\{\\tau_{\\kappa',T}^n<2T\\log n\\}\n\\leq\\frac{C}{n^2}\n\\end{equation}\nand hence, that\n\\begin{equation}\\label{eqinterim10}\n\\Ex\n_{x^n,q^n}\\bigl[\\bigl((2T\\log n-\\tau_{\\kappa',T}^n)^+\n\\bigr)^2\\bigr]\\leq C.\n\\end{equation}\nPlugging (\\ref{eqinterim9}) and (\\ref{eqinterim10}) into (\\ref\n{eqinterim8}) we\nthen have that\n\\begin{equation}\\label{eqafterstopbound1}\n\\Ex_{x^n,q^n}\\biggl[\\int_{\\tau_{\\kappa\n',T}^n}^{2T\\log n}\ne^{-\\gamma s} (e\\cdot c)\\bigl(e\\cdot\\check{X}^n(s)\\bigr)^+\\,ds\\biggr]\\leq C\\log\n^2 n.\n\\end{equation}\n\nTo conclude the proof we will show that (\\ref{eqafterstop1}) follows\nfrom our analysis thus far. 
Indeed,\n\\begin{eqnarray*}\n&&\\Ex[e^{-\\gamma\\tau_{\\kappa',T}^n}\\phi\n_{\\kappa}^n(\\check{X}^n(\\tau_{\\kappa',T}^n))]\\\\\n&&\\qquad\\leq\n\\Ex_{x^n,q^n}^{U}\\biggl[\\int_{\\tau_{\\kappa',T}^n}^{2T\\log\nn}e^{-\\gamma s} \\sup_{0\\leq t\\leq2T\\log n}(e\\cdot c)\\bigl(e\\cdot\\check\n{X}^n(t)\\bigr)^+ \\,ds\\biggr].\n\\end{eqnarray*}\nThe right-hand side here is bounded by $C\\log^2n$ by the same argument\nthat leads to (\\ref{eqafterstopbound1}).\n\n\\subsection*{\\texorpdfstring{Proof of Lemma \\protect\\ref{lemmartingales}}{Proof of Lemma 6.3}} Recall that\n$\\check{W}^n$ is defined by $\\check\n{W}_i^n(t)=M_{i,1}^n(t)-M_{i,2}^n(t)$, where\n\\begin{eqnarray*}\nM_{i,1}^n(t)&=&\\mN_i^a(\\lambda_i^n t)-\\lambda_i^n t,\n\\\\\nM_{i,2}^n(t)&=&\n\\mN_i^d\\biggl(\\mu_i\\int_0^t \\bigl( \\check{X}_i\\lam(s)+\\nu_i\nn-U^n_i(s)\\bigl( e\\cdot\\check{X}\\lam(s)\\bigr)^+ \\bigr) \\,ds \\biggr)\\\\\n&&{}-\\mu\n_i\\int_0^t \\bigl( \\check{X}_i\\lam(s)+\\nu_i n-U^n_i(s)\\bigl( e\\cdot\\check\n{X}\\lam(s)\\bigr)^+ \\bigr) \\,ds.\n\\end{eqnarray*}\nThe fact that the processes\n$M_{i,1}^n(t)$ and $M_{i,2}^n(t)$ are square integrable martingales\nwith respect to the filtration $(\\mathcal{F}_t^n)$ follows as in\nSection 3 of \\cite{pang2007martingale} and specifically as in Lemma\n3.2 there.\n\nSince, with probability 1, there are no simultaneous jumps of $\\mathcal\n{N}_i^a$ and $\\mathcal{N}_i^d$, the quadratic variation process satisfies\n\\begin{eqnarray*}\n[\\check{W}_i^n]_t&=&[M_{i,1}^n]_t+[M_{i,2}^n]_t\\\\\n&=&\\sum_{s\\leq\nt}(\\Delta M_{i,1}^n(s))^2+\\sum_{s\\leq t}(\\Delta M_{i,2}^n(s))^2,\n\\end{eqnarray*}\nwhere the last equality follows again from Lemma 3.1 in \\cite\n{pang2007martingale} (see also Example~5.65 in \\cite{vandervaart}).\nFinally, the predictable quadratic variation process satisfies\n\\begin{eqnarray*}\n\\langle\\check{W}_i^n\\rangle_t&=&\\langle\nM_{i,1}^n\\rangle\n_t+\\langle M_{i,2}^n\\rangle_t\\\\&=&\\lambda_i^nt + \\mu_i\\int_0^t\n\\bigl( 
\\check{X}_i\\lam(s)+\\nu_i n-U^n_i(s)\\bigl( e\\cdot\\check{X}\\lam\n(s)\\bigr)^+ \\bigr) \\,ds\\\\&=&\n\\int_0^t (\\sigma_i^n(\\check{X}^n(s),U^n(s)))^2\\,ds,\n\\end{eqnarray*}\nwhere the second equality again follows from Lemma 3.1 in \\cite\n{pang2007martingale} and the last equality from the definition of\n$\\sigma_i^n(\\cdot,\\cdot)$ [see (\\ref{eqnbmdefn3})]. By Theorem\n3.2 in~\\cite{pang2007martingale} $((\\check{W}_i^n(t))^2-[\\check\n{W}_i^n]_t,t\\geq0)$ and $((\\check{W}_i^n(t))^2-\\langle\\check\n{W}_i^n\\rangle_t,t\\geq0)$ are both martingales with respect to\n$(\\mathcal{F}_t^n)$. In turn, by the optional stopping theorem so are\nthe processes $\\mathcal{M}_i^n(\\cdot)$ and $\\mathcal{V}_i^n(\\cdot)$\nas defined in the statement of the lemma. Finally, it is easy to verify\nthat these are square integrable martingales using the fact that the time\nchanges are bounded for all finite $t$.\n\\end{appendix}\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}}
{"text":"\\section{Introduction}\nThe third $r$-process peak at $A\\sim195$, the platinum peak, seems particularly sensitive to both the nuclear physics input and the conditions of the stellar environment, as can be seen in the detailed sensitivity study reported in Ref.~\\cite{Arcones11}. Radioactive Ion Beam (RIB) facilities have enabled the production and measurement of several waiting-point nuclei directly on the $r$-process path from $N=50$ to $N=82$. However, all of the nuclei that lead directly to the formation of the third $r$-process abundance peak still remain in the region of the {\\it terra incognita}. To a large extent this is due to the very low production cross sections, the limited primary beam intensities available at present RIB facilities and also the challenging experimental conditions of large backgrounds and very low production rates. 
This contribution focuses on the impact of half-lives and beta-delayed neutrons on the formation of the third $r$-process peak, and how present theoretical models compare with the experimental data available.\n\n\n\\section{The r-process paradigm around N=126}\\label{sec:hl}\n\nThe aim of this section is to describe briefly the latest experimental and theoretical results in this mass region, and to explore the impact of the present uncertainties (or discrepancies) on the nucleosynthesis of the $r$ process. On the experimental side, special focus will be placed on the latest experiment performed at GSI, thus providing an update on the present status of the data analysis and its future perspective. \n\nAs mentioned above, the nuclei which are directly in the path of the $r$ process along $N=126$, approximately from gadolinium to tantalum, could not be accessed experimentally yet. Nevertheless, one can try to measure neutron-rich nuclei in the higher $Z$ neighbourhood, on both sides of the $N=126$ shell closure, and use such information as a benchmark for theoretical models. In turn, such models can be applied more reliably to extrapolate the decay properties of the $r$-path nuclei.\nThree independent experiments in the neutron-rich region around $N=126$ have been carried out recently at the GSI facility for heavy ion research (Germany). The FRS fragment separator~\\cite{Geissel92} was used for selection and identification of the ions of interest. The high energy available for primary beams at GSI, of up to 1~GeV\/u, has an advantage over other facilities in reducing ion identification difficulties related to the charge states of the secondary fragments. Fragmentation of primary $^{238}$U and\/or $^{208}$Pb beams on a beryllium target was used for the production of secondary neutron-rich ion beams. These experiments are briefly described below. For details the reader is referred to the cited publications. 
Only the main results from each experiment are summarised here.\n\n\\subsection{$\\beta$-decay experiments around N$\\sim$126}\n\nThe first experiment~\\cite{Alvarez09,Alvarez10} used relativistic fragmentation of both $^{238}$U and $^{208}$Pb projectiles for the production of a large number of new neutron-rich nuclei. Several isotopic species were implanted in a stack of four double-sided silicon-strip detectors (DSSSDs)~\\cite{Kumar09}, which allowed for the position and time measurement of both implant- and decay-events. $\\beta$-Decay half-lives were determined after developing a new numerical method~\\cite{Kurtukian08}, which was needed to account for the high and complex background conditions. Half-lives have been published from $^{194}$Re ($N=119$) up to $^{202}$Ir ($N=125$)~\\cite{Kurtukian07,Kurtukian09}. From this experiment, other isotopes have also been analysed~\\cite{Morales11,Benlliure12}, but they will not be considered in this contribution because they have not been published yet.\n\nNew half-lives have been published recently beyond $N=126$, for $^{219}$Bi and $^{211,212,213}$Tl~\\cite{Benzoni12}. In the latter experiment, in addition to a stack of three DSSDs for measuring ion-implants and $\\beta$-decays, the RISING array~\\cite{Wollersheim05} of 105 HPGe detectors was used to detect $\\gamma$-rays following the $\\beta$-decays. As is nicely illustrated in Ref.~\\cite{Benzoni12}, by means of high-resolution $\\gamma$-ray spectroscopy it was possible to perform a validation of the numerical technique described in Ref.~\\cite{Kurtukian08}, which has been applied in both works~\\cite{Kurtukian09,Benzoni12}.\n\nFinally, the latest measurement in this mass region ($N>126$) used a stack of six SSSDs and three DSSSDs~\\cite{Steiger09} surrounded by a prototype of the BEta deLayEd Neutron (BELEN) detector~\\cite{Gomez11}. The latter allowed for the experimental determination of neutron-branching ratios. 
Technical details and a few preliminary results have been reported recently in Ref.~\\cite{Caballero13}. In summary, the $N>126$ nuclei $^{208-211}$Hg, $^{211-215}$Tl and $^{214-218}$Pb were implanted with sufficient statistics for a reliable analysis of their half-lives. As a first approach, the numerical method~\\cite{Kurtukian08} was applied to the measured thallium isotopes. As reported in Ref.~\\cite{Caballero13}, the preliminary half-lives thus obtained are compatible, within the error bars, with those reported by Benzoni et al.~\\cite{Benzoni12} for the common Tl-nuclei. Alternatively, despite the large and complex background environment, we have investigated the possibility of using the more conventional analytical approach based on determining the half-life from implant-beta time correlations. This still seems feasible in those cases with relatively large implant rates (see Fig.~2 in Ref.~\\cite{Caballero13}). In the case of e.g. $^{212}$Tl, an ion implant-rate of $\\sim$3$\\times 10^{-5}$~counts\/pixel\/s was achieved, with an average rate of $\\beta$-like events of about 4$\\times 10^{-4}$~counts\/pixel\/s. Using this nuclide as an example for illustration purposes, the corresponding implant-beta time correlations are shown in Fig.~\\ref{fig:212Tl}. The correlation area comprises the pixel where the ion was implanted, as well as the eight neighbouring pixels.\n\n\\begin{figure}[!htbp]\n\\begin{center}\n\\includegraphics[width=0.8\\textwidth]{hl_212Tl_s410_preliminary.eps}\n\\caption{\\label{fig:212Tl} (Left) $^{212}$Tl implant-$\\beta$ time correlations in the forward direction (solid symbols) and in the backward direction (open symbols) for all decay events within the implant pixel and the 8 neighbouring pixels. All decay events over a broad time window of several times the expected half-life were considered. (Right) Background subtracted correlation spectrum. 
The quoted half-life value is preliminary.}\n\\end{center}\n\\end{figure}\n\nIn the spectra shown in Fig.~\\ref{fig:212Tl} each implanted $^{212}$Tl ion was correlated with all following $\\beta$-decay events registered inside the aforementioned correlation area of 9 mm$^2$. The time-window for correlations spanned several times the expected half-life. The background was estimated in a similar way, by performing backward ion-$\\beta$ time-correlations over the same DSSSD area. After background subtraction the resulting time-spectrum was fitted with a simple exponential decay (the daughter nucleus $^{212}$Pb shows a half-life of 10.64~h~\\cite{ensdf}), as shown in Fig.~\\ref{fig:212Tl}. The preliminary value obtained for the half-life of $^{212}$Tl was $t^{^{212}Tl}_{1\/2} = 44(20)$~s. This value, although smaller than the one obtained with the numerical method, is still in perfect agreement (within the quoted error bars) with the half-life reported previously. From the 14 isotopic species implanted, we expect that the analytical method can be applied to derive the half-lives of at least half of them, those showing high implant statistics. The remaining cases will be analysed, most probably, by applying the numerical approach~\\cite{Kurtukian08}. \n\nThe results for the half-lives determined in these three experiments are displayed in Fig.~\\ref{fig:hl}-left. The solid red circle shows the preliminary result discussed above for the half-life of $^{212}$Tl.\n\n\\begin{figure}[!htbp]\n\\begin{center}\n\\includegraphics[width=0.49\\textwidth]{hl_theo_exp_NoMorales.eps}\n\\hfill\n\\includegraphics[width=0.49\\textwidth]{fig_theo_n126.eps}\n\\caption{\\label{fig:hl}(Left) Comparison between theoretical predictions and measurements in the region around $N=126$. Experimental values from $^{194}$Re to $^{202}$Ir are from Ref.~\\cite{Kurtukian09}, values between $^{211}$Tl and $^{219}$Bi are from Ref.~\\cite{Benzoni12}. 
(Right) Theoretical values published in the literature for the waiting point nuclei along $N=126$ (adapted from Ref.\\cite{Zhi13}).}\n\\end{center}\n\\end{figure}\n\n\\subsection{Theory versus experiment}\n\nAlthough still far from the $r$-process path, these measurements can be used to judge the reliability of theoretical models in the heavy neutron-rich region around $N \\sim 126$. Fig.~\\ref{fig:hl}-left shows a comparison between the aforementioned experimental data and the two main theoretical calculations available in this region. The blue up-pointing triangles correspond to the model of P.~M\\\"oller~\\cite{Moller03}, commonly used in many $r$-process network calculations. This approach is based on the finite-range droplet mass model (FRDM) and uses the quasiparticle random-phase approximation (QRPA) for the Gamow-Teller (GT) part of the $\\beta$-strength function, whereas the gross-theory is implemented to account for the first-forbidden (FF) part of the $\\beta$-decay. \nThe pink down-pointing triangles correspond to the calculations by I.~Borzov using the Fayans energy-density functional (DF3 version) within a continuum QRPA framework~\\cite{Borzov03,Borzov11}. The latter approach allows for a self-consistent treatment of both allowed GT- and FF-transitions. Although the contribution of FF-transitions can be neglected at the $N=50$ and $N=82$ shell closures, it is expected that around $N = 126$ they contribute significantly to the $\\beta$-strength distribution and even dominate it for Z$\\geq$70, thus providing a remarkable shortening of the half-lives in this mass region~\\cite{Borzov03}.\n\n\n\nIn comparison with the experimental data displayed in Fig.~\\ref{fig:hl}-left, the FRDM+QRPA calculations~\\cite{Moller03} on average overestimate the measured half-lives by a factor of 24 (average ratio of calculation\/experiment) in the region $N\\leq 126$, and underestimate them by less than 40\\% beyond $N=126$ (the average ratio is 0.62). 
The CQRPA+DF3 model~\\cite{Borzov11} agrees better with the experimental data in the region $N\\leq 126$, showing an average deviation of only a factor of 2 in that region. The discrepancy becomes again $\\lesssim$40\\%, on average, beyond the neutron shell closure. \nAs already noted in Refs.~\\cite{Benlliure12,Benzoni12}, theoretical models seem to overestimate the half-lives before the $N=126$ neutron shell closure, and to underestimate them beyond it. The question, however, is how well these models will perform when one uses them to extrapolate further off stability, into the region of the waiting-point nuclei. To answer this question one can compare the theoretical models among themselves. In particular, shell-model calculations~\\cite{Suzuki12,Zhi13}, which have recently become available, seem quite helpful given the absence of any experimental information. Such a comparison is shown on the right-hand side of Fig.~\\ref{fig:hl}, which has been adapted from Ref.~\\cite{Zhi13}. As expected, the agreement of the different models near the neutron shell closure is much better than toward the valley of stability (see Fig.~\\ref{fig:hl}-left), owing to the simplified nature of the shell closure. Indeed, comparing the performance of the FRDM+QRPA model against large-scale shell model calculations~\\cite{Zhi13} one finds discrepancies which range from $\\sim$30\\% up to a factor of 5, with an average deviation of ``only'' a factor of 2. In summary, one can conclude two things. First, the uncertainties on the beta-decay half-lives along the shell closure itself are expected to be less severe than in the $N<126, \\,\\, Z<82$ mass region covered by the $r$ process during freeze-out. 
Second, the latest measurements on both sides of $N=126$ indeed provide a very stringent test for theoretical models.\n\n\\subsection{R-process nucleosynthesis}\nThe impact of the variations in half-lives on the final abundances has been studied in several works (see e.g.~\\cite{Borzov08,Suzuki12,Arcones11} and references therein). We have performed new network calculations in order to illustrate the effect of the aforementioned shortening of the half-lives along the $N=126$ waiting-point nuclei, as well as in the $N<126,\\,\\, Z<82$ neighbourhood. In these calculations the so-called cold trajectory of Ref.~\\cite{Arcones11} was used in combination with the single-zone code of \\verb+NucNet Tools+~\\cite{Meyer12,libnucnet}. Rather standard parameters were used for the electron fraction ($Y_e \\approx 0.47$) and the entropy (S$\\approx 200 k_B\/$nuc). In order to accelerate the calculation of the $r$-process phase of the expansions, we\nused Krylov-space iterative matrix solver routines~\\cite{Saad03} as implemented in\nthe freely-available software \\verb+Sparskit2+. \nThe very sparse network matrix that\nmust be solved during the $r$-process phase is well-conditioned, so such\niterative solvers converge rapidly, and we found roughly a factor of ten\nspeed-up over the default sparse matrix solver in \\verb+NucNet Tools+ when\nusing these iterative solvers.\nThe resulting abundances as a function of the mass number are shown in Fig.~\\ref{fig:yvsa}-left. The dotted open circles represent the $r$-process abundances in the solar system. The solid black curve shows a reference calculation performed with the JINA REACLIB library~\\cite{Cyburt10}. In the latter database, beta-decay rates are taken from Ref.~\\cite{Moller03} (FRDM+QRPA) where no data are available. The calculation provides a reasonable description of the solar $r$-process abundance pattern in the region of the second and third $r$-process peaks. 
Discrepancies such as the width of the peak or the region between both shell closures are not relevant for the present discussion, where the aim is to illustrate rather qualitatively the impact of reasonable half-life variations in this region. Reducing the FRDM+QRPA half-lives of the $N=126$ waiting-point nuclides, from gadolinium up to tantalum, by a factor of two, one obtains the abundances represented in Fig.~\\ref{fig:yvsa}-left by the red-dashed line. A small shift towards higher masses is observed, similar to that reported e.g. in Ref.~\\cite{Suzuki12}. This seems to be in contradiction with the solar $r$-process abundances, where the maximum appears at a lower mass number around $A\\sim195$. Furthermore, as discussed above, the uncertainties on the half-lives of the nuclei in the $N<126$ region seem to be much larger than those at the shell closure itself. Obviously, the uncertainty will depend on the complexity of each nucleus and the difficulties of the model in reproducing the beta-strength distribution and the underlying nuclear structure details. Since it is difficult to assess an uncertainty on each particular nucleus, let us examine the effect of varying the half-lives in that region by a constant factor of the same order of magnitude as the discrepancies found between experiment and theory (FRDM+QRPA) discussed above. Assuming an average reduction of ``only'' a factor of 12 (the average ratio theory\/experiment found above was $\\sim$24) in the half-lives of the nuclei on the left-hand side of $N=126$, from $N=116$ up to $N\\lesssim 125$, one obtains a much stronger shift of the third $r$-process peak, towards higher masses, as shown in Fig.~\\ref{fig:yvsa}-left by the blue dot-dashed line. Additionally, the third $r$-process peak becomes narrower, an effect that disagrees further with the observed abundance pattern. Thus, one can conclude two things. 
First, a reduction of the FRDM+QRPA half-lives in the $N\\leq 126$ region, which is indeed expected both from experiment and theory, produces a shift of the third $r$-process peak which seems to go in the wrong direction, i.e. toward higher masses. Second, with the data currently available, the impact on the final abundances due to half-life uncertainties for nuclei in the region $N<126$ may be even more important than the uncertainties for the waiting-point nuclei themselves.\n\\begin{figure}[h]\n\\begin{center}\n\\includegraphics[width=0.49\\textwidth]{yvsa_thl_npa_v2.eps}\n\\includegraphics[width=0.49\\textwidth]{yvsa_Pn_npa.eps}\n\\caption{\\label{fig:yvsa} Abundance versus mass number. Dotted open circles represent solar $r$-process abundances. The solid black curve corresponds to a reference calculation, where the decay rates are essentially based on FRDM+QRPA~\\cite{Moller03}. (Left panel) The dashed red line shows the same calculation with a factor of 2 reduction on the half-lives of the $N=126$ waiting point nuclei. The blue dash-dotted line shows the abundances obtained when the half-lives of the nuclei on the left-hand side of the waiting points, from $N=116$ up to $N=125$, were shortened by a factor of 12. See text for details. (Right panel) The red dash-dotted line shows the effect of a reduction by a factor of 10 on the neutron emission probabilities for nuclei in the $116 < N \\leq 126$ region. The blue dotted line corresponds to an enhancement, by a factor of 10, of the neutron emission probabilities of the same nuclei. The blue dashed line shows the impact of a reduction by a factor of 2 on the neutron branchings.}\n\\end{center}\n\\end{figure}\nFrom the nuclear physics point of view, the discrepancy between shorter half-lives and the position (and width) of the third $r$-process peak may be counterbalanced by the effect of $\\beta$-delayed neutron emission. 
Indeed, for a given $Q_{\\beta}$ value, shorter half-lives usually correlate with a small neutron emission probability. The impact of (smaller) neutron emission probabilities on the final abundances is discussed in Sec.~\\ref{sec:neutrons}.\n\n\n\\section{$\\beta$-Delayed neutron emission around N$\\sim$126}\\label{sec:neutrons}\n\nThe emission of $\\beta$-delayed neutrons plays an important role in the formation of the third $r$-process peak~\\cite{Moller03,Arcones11}. However, the situation in terms of experimental information becomes really critical around $N \\sim 126$, as demonstrated in Fig.~\\ref{fig:pn}-left. \n\\begin{figure}[h]\n\\begin{center}\n\\includegraphics[width=0.45\\textwidth]{nc_Pn_rpath_npa.eps}\n\\includegraphics[width=0.35\\textwidth]{Pn_212Tl.eps}\n\\caption{\\label{fig:pn} (Left) Nuclear chart showing in red nuclei where experimental information on the neutron branching ratio is available~\\cite{ensdf}. (Right) Time spectrum showing implant-$\\beta$-neutron correlations for $^{212}$Tl.}\n\\end{center}\n\\end{figure}\nIn the latter figure, nuclei for which any experimental information~\\cite{ensdf} on the beta-delayed neutron emission probability is known are shown in red. For the sake of comparison, the $r$-process path at some stage of a network calculation is also shown in the same figure.\nIt is worth emphasising that beyond $A \\sim 150$ there is practically no experimental information available. The only data available around $N\\sim126$ are on $^{210}$Tl~\\cite{Kogan57}.\n\nMotivated by this situation we used the BELEN neutron detector in the last experiment performed at GSI around $N \\gtrsim 126$. The detector consisted of 30 $^{3}$He proportional counters embedded in a polyethylene matrix, which served as moderator. More details about this measurement can be found in Ref.~\\cite{Caballero13}. The analysis of the beta-delayed neutron emission probabilities is still ongoing. 
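Schematically, the branching ratio will be obtained from the numbers of background-subtracted implant-$\\beta$ and implant-$\\beta$-neutron correlations, $N_{\\beta}$ and $N_{\\beta n}$, corrected for the neutron detection efficiency $\\varepsilon_n$ of BELEN. Assuming, for the purpose of this sketch, that the $\\beta$-detection efficiency cancels in the coincidence ratio,\n\\begin{equation}\nP_n \\simeq \\frac{N_{\\beta n}}{\\varepsilon_n N_{\\beta}}.\n\\end{equation}\n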
Following the example shown before for $^{212}$Tl, the spectrum of beta-delayed neutrons registered after the decays of $^{212}$Tl is shown in Fig.~\\ref{fig:pn}-right. These are implant-$\\beta$-neutron correlations, which could be observed mostly due to the high efficiency of BELEN, which was $\\sim$40\\%. Using this kind of correlation analysis, we expect to obtain neutron branching ratios (or upper limits) for all the implanted species.\n\nSince there is no experimental information on neutron-emission probabilities in the mass-region around $N=126$, it is not possible to study reliably the capability of theoretical models for predicting the effect of neutron emission. However, one can again compare the predictions of different models among themselves. One such comparison is shown in Fig.~17 of Ref.~\\cite{Zhi13} for the three models described above, FRDM+QRPA~\\cite{Moller03}, DF3+CQRPA~\\cite{Borzov11} and LSSM~\\cite{Zhi13}. For the waiting points with Z$<$69, the neutron emission probabilities predicted by the FRDM+QRPA are, on average, a factor of 3 larger than the values calculated with the LSSM. This ratio becomes $\\sim$0.6 for the waiting point nuclei with Z$\\geq$69, mostly due to the contribution of FF transitions. It is not straightforward to predict what the uncertainties in the calculated neutron-emission probabilities would be, in particular for the neutron-rich nuclei involved during the freeze-out phase. In this case, variations of the neutron-branching ratios are rather arbitrary, but they can still be used for a qualitative interpretation of the large uncertainties in these quantities.\nIn this respect, Fig.~\\ref{fig:yvsa}-right shows the effect of neutron-branching variations in the mass region $116 < N \\leq 126$. Final abundances obtained when the neutron emission probabilities are reduced by a factor of 2 are shown by the blue dashed line. 
The red dash-dotted line shows the effect of a factor of 10 reduction in the neutron branching ratios. The blue dotted line corresponds to a factor of 10 enhancement of the neutron branching ratios.\n\nIn summary, the discrepancy between shorter half-lives and the formation of the third peak discussed in Sec.~\\ref{sec:hl} could be compensated, at least to some extent, by a reduction of the neutron emission probabilities. Clearly, new $\\beta$-delayed neutron emission measurements are needed around $N \\sim 126$ in order to confirm such a hypothesis, and to reduce the still very large contribution of the nuclear physics input uncertainties to the calculated $r$-process abundances.\n\n\\section{Summary and outlook}\nBoth theory and experiment indicate that half-lives in the $N \\sim 126$ region near the $r$-process path should be much shorter than those commonly used in $r$-process model calculations (FRDM+QRPA model~\\cite{Moller03}). This, however, would imply a shift of the third $r$-process peak towards higher masses, an effect which is in contradiction with the observed $r$-process abundances. From the nuclear physics input, such a discrepancy could be removed if the neutron emission probabilities were also smaller. This seems to be the case, as has been shown by recent large-scale shell-model calculations~\\cite{Zhi13}. In the near future, the first experimental values for the neutron emission probabilities of several nuclei beyond $N=126$ will become available from the experiment performed at GSI with the BELEN detector. This will represent a first test of the $\\beta$-strength distribution above the neutron separation energy, but clearly new measurements are needed in the neutron-rich heavy-mass region.\n\n\\section{Acknowledgments}\nWe thank the technical staff of the GSI accelerators, the FRS, and the target laboratory for their support during the S410 experiment. 
This work has been partially supported by the Spanish Ministry of Economy and Competitiveness under grants FPA2011-24553, FPA2011-28770-C03-03 and AIC-D-2011-0705. I.D., A.Ev., and M.M. are supported by the Helmholtz association via the Young Investigators group LISA (VH-NG-627).\n\\section*{References}\n\n\\section{Introduction} \\indent\n\nTheoretical and experimental studies of vector ($W^\\pm$ and $Z^0$) boson production at high energies provide information about the nature of both the underlying electroweak interaction and the effects of Quantum Chromodynamics (QCD). In many respects these processes have become one of the most important \"standard candles\" of experimental high energy physics~[1--9]. At the Tevatron, measurements of $W^\\pm$ and $Z^0$ inclusive cross sections are routinely used to validate detector and trigger performance and stability. Data from gauge boson production also provide bounds on parametrizations used to describe the non-perturbative regime of QCD processes. At the LHC, such measurements can serve as a useful tool to determine the integrated luminosity and can also be used to normalize measurements of other production cross sections (for example, the cross sections of $W + n$-jet or diboson production). Additionally, the study of inclusive vector boson production is a necessary starting point for investigations of Higgs boson or top quark production, where many signatures include these bosons.\n\nAt leading order (LO) of QCD, $W^\\pm$ and $Z^0$ bosons are produced via quark-antiquark annihilation. Beyond the LO Born process, vector bosons can also be produced in $q + g$ interactions, so both the quark and gluon distribution functions of the proton play an important role. 
Theoretical calculations of the $W^\\pm$ and $Z^0$ production cross sections have been carried out at next-to-leading order (NLO) and next-to-next-to-leading order (NNLO)~[10--14] of QCD. The NLO cross section is $\\sim 25$\\% larger than the Born-level cross section, and the NNLO cross section is an additional $\\sim 3$\\% higher. However, these perturbative calculations are reliable at high $p_T$ only, since they diverge in the small-$p_T$ region, $p_T \\ll m$, with terms proportional to $\\ln m\/p_T$ (appearing due to soft and collinear gluon emission). Therefore, the soft gluon resummation technique~[15--19] should be used to make QCD predictions at low $p_T$. The traditional calculations combine fixed-order perturbation theory with analytic resummation and some matching criterion. The analytic resummation can be performed either in the transverse momentum space~[20] or in the Fourier conjugate impact parameter space~[21]. Differences between the two formalisms are discussed in~[22].\n\nAn alternative description can be provided by the $k_T$-factorization approach of QCD~[23, 24]. This approach is based on the familiar Balitsky-Fadin-Kuraev-Lipatov (BFKL)~[25] or Catani-Ciafaloni-Fiorani-Marchesini (CCFM)~[26] gluon evolution equations and takes into account the large logarithmic terms proportional to $\\ln 1\/x$. This contrasts with the usual Dokshitzer-Gribov-Lipatov-Altarelli-Parisi (DGLAP)~[27] strategy, where only the large logarithmic terms proportional to $\\ln \\mu^2$ are taken into account. The basic dynamical quantity of the $k_T$-factorization approach is the unintegrated (i.e., ${\\mathbf k}_T$-dependent) parton distribution $f_a(x,{\\mathbf k}_T^2,\\mu^2)$, which determines the probability to find a type $a$ parton carrying the longitudinal momentum fraction $x$ and the transverse momentum ${\\mathbf k}_T$ at the probing scale $\\mu^2$. In this approach, since each incoming parton carries its own nonzero transverse 
momentum, the Born-level subprocess $q + \\bar q^\\prime \\to W^\\pm\/Z^0$ already generates the $p_T$ distribution of the produced vector boson. Similarly to DGLAP, to calculate the cross section of any physical process the unintegrated parton density $f_a(x,{\\mathbf k}_T^2,\\mu^2)$ has to be convoluted~[23, 24] with the relevant partonic cross section, which has to be taken off mass shell (${\\mathbf k}_T$-dependent). The soft gluon resummation formulas are the result of an approximate treatment of the solutions of the CCFM evolution equation~[28]. Other important properties of the $k_T$-factorization formalism are the additional contribution to the cross sections due to the integration over the ${\\mathbf k}_T^2$ region above $\\mu^2$ and the broadening of the transverse momentum distributions due to the extra transverse momentum of the colliding partons\\footnote{For an introduction to $k_T$-factorization, see, for example, review~[29].}.\n\nThe $k_T$-factorization formalism has already been applied~[30] to calculate the transverse momentum distribution of inclusive $W^\\pm$ and $Z^0$ production at the Tevatron. The calculations~[30] were based on the usual (on-mass shell) matrix elements of the quark-antiquark annihilation subprocess $q + \\bar q^\\prime \\to W^\\pm\/Z^0$, which were embedded in precise off-shell kinematics. However, an important component of the calculations~[30] is the unintegrated quark distribution in a proton. 
At present these distributions are only available in the framework of the Kimber-Martin-Ryskin (KMR) approach~[31], since there are some theoretical difficulties in obtaining the quark densities directly from the CCFM or BFKL equations\\footnote{The unintegrated quark density was considered recently in~[32].} (see, for example, review~[29] for more details). As a result, the dependence of the calculated cross sections on the non-collinear evolution scheme has not been investigated. This dependence can in general be significant, and it is a special subject of study in the $k_T$-factorization formalism. Therefore in the present paper we try a different and more systematic way. Instead of using the unintegrated quark distributions and the corresponding quark-antiquark annihilation cross section, we calculate the off-shell matrix element of the $g^* + g^* \\to W^\\pm\/Z^0 + q \\bar q^\\prime$ subprocess and then operate in terms of the unintegrated gluon densities only. In this scenario, at the price of considering $2 \\to 3$ rather than $2 \\to 1$ matrix elements, the problem of the unknown unintegrated quark distributions is reduced to that of the gluon distributions. However, since the gluons are responsible for the appearance of sea quarks only, and not of valence quarks, the contribution from the valence quarks should be 
calculated separately. 
Bearing in mind that the valence quarks are only important at large $x$, where the traditional DGLAP evolution is accurate and reliable, this contribution can be taken into account within the usual collinear scheme based on the $q + g^* \\to W^\\pm\/Z^0 + q^\\prime$ and $q + \\bar q^\\prime \\to W^\\pm\/Z^0$ matrix elements convoluted with the on-shell valence quark and\/or off-shell gluon densities\\footnote{To avoid double counting we have not considered here the $q + \\bar q^\\prime \\to W^\\pm\/Z^0 + g$ subprocess.}. Thus, the proposed approach enables us to make comparisons between different parton evolution schemes and parametrizations of the parton densities\\footnote{A similar scenario has been applied recently to prompt photon hadroproduction at the Tevatron~[33].}.\n\nWe should mention, of course, that this idea can only work well if the sea quarks appear from the last step of the gluon evolution --- then we can absorb this last step of the gluon ladder into the hard matrix element. However, this method does not apply to the quarks coming from the earlier steps of the evolution (i.e., from the second-to-last, third-to-last and other gluon splittings). It is not evident in advance whether the last gluon splitting dominates or not; the goal of our study is to clarify this point.\n\nThe outline of our paper is as follows. In Section~2 we briefly recall the basic formulas of the $k_T$-factorization approach, with a brief review of the calculation steps and the unintegrated parton densities used. We concentrate mainly on the $g^* + g^* \\to W^\\pm\/Z^0 + q \\bar q^\\prime$ subprocess. The evaluation of the $q + g^* \\to W^\\pm\/Z^0 + q^\\prime$ and $q + \\bar q^\\prime \\to W^\\pm\/Z^0$ contributions is straightforward and, for the reader's convenience, we only collect the main relevant formulas in the Appendix. In Section~3 we present the numerical results of our calculations. 
\nThe central point is the discussion of the role played by each of the contributions mentioned above in the cross section. Special attention is paid to the transverse momentum distributions of the $W^\\pm$ and $Z^0$ bosons measured by the D$\\oslash$~[5, 8, 9] and CDF~[4] collaborations. Section~4 contains our conclusions.\n\n\\section{Theoretical framework} \\indent\n\nSince the off-shell gluon-gluon fusion subprocess $g^* + g^* \\to W^\\pm\/Z^0 + q \\bar q^\\prime$ is calculated here for the first time in the literature, we find it reasonable to explain the calculation in more detail.\n\n\\subsection{Kinematics} \\indent \n\nWe start from the kinematics (see Fig.~1). Let $p^{(1)}$ and $p^{(2)}$ be the four-momenta of the incoming protons and $p$ the four-momentum of the produced $W^\\pm$\/$Z^0$ boson. The initial off-shell gluons have the four-momenta $k_1$ and $k_2$, and the final quark $q$ and antiquark $\\bar q^\\prime$ have the four-momenta $p_1$ and $p_2$ and the masses $m_1$ and $m_2$, respectively. In the $p \\bar p$ center-of-mass frame we can write\n$$\n p^{(1)} = {\\sqrt s\\over 2} (1,0,0,1),\\quad p^{(2)} = {\\sqrt s\\over 2} (1,0,0,-1), \\eqno(1)\n$$\n\n\\noindent\nwhere $\\sqrt s$ is the total energy of the process under consideration and we neglect the masses of the incoming protons. The initial gluon four-momenta in the high energy limit can be written as\n$$\n k_1 = x_1 p^{(1)} + k_{1T},\\quad k_2 = x_2 p^{(2)} + k_{2T}, \\eqno(2)\n$$\n\n\\noindent \nwhere $k_{1T}$ and $k_{2T}$ are the transverse four-momenta. It is important that ${\\mathbf k}_{1T}^2 = - k_{1T}^2 \\neq 0$ and ${\\mathbf k}_{2T}^2 = - k_{2T}^2 \\neq 0$. 
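The decomposition (2) implies that the gluon virtualities are purely transverse, $k_i^2 = k_{iT}^2 = -{\\mathbf k}_{iT}^2$. A small numerical check of this property (the momentum fractions and transverse momenta below are illustrative values only):

```python
import math

def minkowski_sq(p):
    """Four-momentum squared, p = (E, px, py, pz), metric (+,-,-,-)."""
    E, px, py, pz = p
    return E*E - px*px - py*py - pz*pz

sqrt_s = 1800.0              # GeV, Tevatron energy considered in the text
x1, x2 = 0.05, 0.01          # illustrative momentum fractions
k1T = (3.0, -1.5)            # illustrative transverse momenta (GeV)
k2T = (-2.0, 0.5)

p_a = (sqrt_s/2, 0.0, 0.0,  sqrt_s/2)   # Eq. (1)
p_b = (sqrt_s/2, 0.0, 0.0, -sqrt_s/2)

# Eq. (2): k_i = x_i p^(i) + k_iT, with k_iT purely transverse
k1 = tuple(x1*c + t for c, t in zip(p_a, (0.0, k1T[0], k1T[1], 0.0)))
k2 = tuple(x2*c + t for c, t in zip(p_b, (0.0, k2T[0], k2T[1], 0.0)))

# The gluons are off shell: k_i^2 = -|k_iT|^2 != 0
assert math.isclose(minkowski_sq(k1), -(k1T[0]**2 + k1T[1]**2))
assert math.isclose(minkowski_sq(k2), -(k2T[0]**2 + k2T[1]**2))
```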
From the conservation laws \nwe can obtain the following relations:\n$$\n {\\mathbf k}_{1T} + {\\mathbf k}_{2T} = {\\mathbf p}_{1T} + {\\mathbf p}_{2T} + {\\mathbf p}_{T},\n$$\n$$\n x_1 \\sqrt s = m_{1T} e^{y_1} + m_{2T} e^{y_2} + m_T e^y, \\eqno(3)\n$$\n$$\n x_2 \\sqrt s = m_{1T} e^{-y_1} + m_{2T} e^{-y_2} + m_T e^{-y},\n$$\n\n\\noindent \nwhere $p_T$, $m_T$ and $y$ are the transverse momentum, \ntransverse mass and center-of-mass rapidity of produced $W^\\pm$\/$Z^0$ boson, \n$p_{1T}$ and $p_{2T}$ are the transverse momenta of final quark\n$q$ and antiquark $\\bar q^\\prime$, $y_1$, $y_2$, $m_{1T}$ and $m_{2T}$ \nare their rapidities and \ntransverse masses, i.e. $m_{iT}^2 = m_i^2 + {\\mathbf p}_{iT}^2$.\n\n\\subsection{Off-shell amplitude of the $g^* + g^* \\to W^\\pm\/Z^0 + q \\bar q^\\prime$ subprocess} \\indent \n\nThere are eight Feynman diagrams (see Fig.~2) which describe the partonic\nsubprocess $g^* + g^* \\to W^\\pm\/Z^0 + q \\bar q^\\prime$ at $\\alpha \\alpha_s^2$ order.\nLet $\\epsilon_1$, $\\epsilon_2$ and $\\epsilon$ be the initial gluon and produced gauge boson \npolarization vectors, respectively, and $a$ and $b$ the eight-fold color indices of the off-shell\ngluons.\nThen the relevant matrix element can be presented as follows:\n$$\n {\\cal M}_1 = g^2 \\, \\bar u (p_1) \\, t^a \\gamma^\\mu \\epsilon_\\mu {\\hat p_1 - \\hat k_1 + m_1\\over m_1^2 - (p_1 - k_1)^2} T_{W,Z}^\\lambda \\, \\epsilon_\\lambda {\\hat k_2 - \\hat p_2 + m_2\\over m_2^2 - (k_2 - p_2)^2} t^b \\gamma^\\nu \\epsilon_\\nu \\, u(p_2), \\eqno(4)\n$$\n$$\n {\\cal M}_2 = g^2 \\, \\bar u (p_1) \\, t^b \\gamma^\\nu \\epsilon_\\nu {\\hat p_1 - \\hat k_2 + m_1\\over m_1^2 - (p_1 - k_2)^2} T_{W,Z}^\\lambda \\, \\epsilon_\\lambda {\\hat k_1 - \\hat p_2 + m_2\\over m_2^2 - (k_1 - p_2)^2} t^a \\gamma^\\mu \\epsilon_\\mu \\, u(p_2), \\eqno(5)\n$$\n$$\n {\\cal M}_3 = g^2 \\, \\bar u (p_1) \\, t^a \\gamma^\\mu \\epsilon_\\mu {\\hat p_1 - \\hat k_1 + m_1\\over m_1^2 - (p_1 - k_1)^2}\\, t^b 
\\gamma^\\nu \\epsilon_\\nu { - \\hat p_2 - \\hat p + m_1\\over m_1^2 - ( - p_2 - p)^2} T_{W,Z}^\\lambda \\, \\epsilon_\\lambda \\, u(p_2), \\eqno(6)\n$$\n$$\n {\\cal M}_4 = g^2 \\, \\bar u (p_1) \\, t^b \\gamma^\\nu \\epsilon_\\nu {\\hat p_1 - \\hat k_2 + m_1\\over m_1^2 - (p_1 - k_2)^2}\\, t^a \\gamma^\\mu \\epsilon_\\mu { - \\hat p_2 - \\hat p + m_1\\over m_1^2 - ( - p_2 - p)^2} T_{W,Z}^\\lambda \\, \\epsilon_\\lambda \\, u(p_2), \\eqno(7)\n$$\n$$\n {\\cal M}_5 = g^2 \\, \\bar u (p_1) \\, T_{W,Z}^\\lambda \\, \\epsilon_\\lambda {\\hat p_1 + \\hat p + m_2\\over m_2^2 - (p_1 + p)^2}\\, t^b \\gamma^\\nu \\epsilon_\\nu { \\hat k_1 - \\hat p_2 + m_2\\over m_2^2 - (k_1 - p_2)^2} t^a \\gamma^\\mu \\epsilon_\\mu \\, u(p_2), \\eqno(8)\n$$\n$$\n {\\cal M}_6 = g^2 \\, \\bar u (p_1) \\, T_{W,Z}^\\lambda \\, \\epsilon_\\lambda {\\hat p_1 + \\hat p + m_2\\over m_2^2 - (p_1 + p)^2}\\, t^a \\gamma^\\mu \\epsilon_\\mu { \\hat k_2 - \\hat p_2 + m_2\\over m_2^2 - (k_2 - p_2)^2} t^b \\gamma^\\nu \\epsilon_\\nu \\, u(p_2), \\eqno(9)\n$$\n$$\n \\displaystyle {\\cal M}_7 = g^2 \\, \\bar u (p_1) \\, \\gamma^\\rho C^{\\mu \\nu \\rho}(k_1,k_2,- k_1 - k_2){\\epsilon_\\mu \\epsilon_\\nu \\over (k_1 + k_2)^2} f^{abc} t^c \\times \\atop \n \\displaystyle \\times { - \\hat p_2 - \\hat p + m_1\\over m_1^2 - ( - p_2 - p)^2}\\, T_{W,Z}^\\lambda \\, \\epsilon_\\lambda \\, u(p_2), \\eqno(10)\n$$\n$$\n \\displaystyle {\\cal M}_8 = g^2 \\, \\bar u (p_1) \\, T_{W,Z}^\\lambda \\, \\epsilon_\\lambda {\\hat p_1 + \\hat p + m_2\\over m_2^2 - (p_1 + p)^2} \\times \\atop \n \\displaystyle \\times \\gamma^\\rho C^{\\mu \\nu \\rho}(k_1,k_2,- k_1 - k_2) {\\epsilon_\\mu \\epsilon_\\nu \\over (k_1 + k_2)^2} f^{abc} t^c \\, u(p_2). 
\eqno(11)\n$$\n\n\\vspace{0.2cm}\n\n\\noindent\nIn the above expressions $C^{\\mu \\nu \\rho}(k,p,q)$ and $T_{W,Z}^\\lambda$ are related to the standard QCD three-gluon coupling and the $W^\\pm$\/$Z^0$-fermion vertices:\n$$\n C^{\\mu \\nu \\rho}(k,p,q) = g^{\\mu \\nu} (p - k)^\\rho + g^{\\nu \\rho} (q - p)^\\mu + g^{\\rho \\mu} (k - q)^\\nu, \\eqno(12)\n$$\n$$\n T^\\lambda_W = {e\\over 2 \\sqrt 2 \\sin \\theta_W} \\gamma^\\lambda (1 - \\gamma^5) V_{qq^\\prime}, \\eqno(13)\n$$\n$$\n T^\\lambda_Z = {e\\over \\sin 2 \\theta_W} \\gamma^\\lambda \\left[I_3^{(q)}(1 - \\gamma^5) - 2 e_q \\sin^2 \\theta_W\\right], \\eqno(14)\n$$\n\n\\noindent\nwhere $I_3^{(q)}$ and $e_q$ are the weak isospin and the fractional electric charge (in units of the positron charge $e$) of the final-state quark $q$, $\\theta_W$ is the Weinberg mixing angle and $V_{qq^\\prime}$ is the Cabibbo-Kobayashi-Maskawa (CKM) matrix element. Of course, in the case of $Z^0$ production $m_1$ equals $m_2$. The summation over the $W^\\pm$\/$Z^0$ polarizations is carried out using the covariant formula\n$$\n \\sum \\epsilon^\\mu (p) \\, \\epsilon^{* \\, \\nu} (p) = - g^{\\mu \\nu} + {p^\\mu p^\\nu\\over m^2}. \\eqno(15)\n$$\n\n\\noindent\nIn the case of the initial off-shell gluons we use the BFKL prescription~[23, 24]:\n$$\n \\sum \\epsilon^\\mu (k_i) \\, \\epsilon^{* \\, \\nu} (k_i) = {k_{iT}^\\mu k_{iT}^\\nu \\over {\\mathbf k}_{iT}^2}. \\eqno(16)\n$$\n\n\\noindent\nThis formula converges to the usual expression $\\sum \\epsilon^\\mu \\epsilon^{* \\, \\nu} = -g^{\\mu \\nu}$ after azimuthal angle averaging in the $k_T \\to 0$ limit. The evaluation of the traces in~(4) --- (11) was done using the algebraic manipulation system \\textsc{Form}~[34]. We would like to mention here that the usual method of squaring the amplitudes~(4) --- (11) results in an enormously long output. 
This technical problem was solved by applying the method of orthogonal amplitudes~[35].\n\nThe gauge invariance of the matrix element is a subject of special attention in the $k_T$-factorization approach. Strictly speaking, the diagrams shown in Fig.~2 are insufficient and have to be accompanied by the graphs involving direct gluon exchange between the protons (these protons are not shown in Fig.~2). These graphs are necessary to maintain the gauge invariance. However, they violate the factorization, since they cannot be represented as a convolution of the gluon-gluon fusion matrix element with the unintegrated gluon density. The solution pointed out in~[24] refers to the fact that, within the particular gauge~(16), the contribution from these unfactorizable diagrams vanishes, and one only has to take into account the graphs depicted in Fig.~2. We have successfully tested the gauge invariance of the matrix element~(4) --- (11) numerically\\footnote{At the preliminary stage of the work we have made a cross-check of the matrix elements, which have been calculated independently by M.~Deak and F.~Schwennsen.}.\n\n\\subsection{Cross section for the inclusive $W^\\pm$\/$Z^0$ production} \\indent \n\nAccording to the $k_T$-factorization theorem, the inclusive $W^\\pm$\/$Z^0$ production cross section via the fusion of two off-shell gluons can be written as a convolution\n$$\n \\displaystyle \\sigma (p + \\bar p \\to W^\\pm\/Z^0 + X) = \\sum_{q} \\int {dx_1\\over x_1} f_g(x_1,{\\mathbf k}_{1 T}^2,\\mu^2) d{\\mathbf k}_{1 T}^2 {d\\phi_1\\over 2\\pi} \\times \\atop \n \\displaystyle \\times \\int {dx_2\\over x_2} f_g(x_2,{\\mathbf k}_{2 T}^2,\\mu^2) d{\\mathbf k}_{2 T}^2 {d\\phi_2\\over 2\\pi} d{\\hat \\sigma} (g^* + g^* \\to W^\\pm\/Z^0 + q \\bar q^\\prime), \\eqno(17)\n$$\n\n\\noindent \nwhere $\\hat \\sigma(g^* + g^* \\to W^\\pm\/Z^0 + q \\bar q^\\prime)$ is the partonic cross section, $f_g(x,{\\mathbf k}_{T}^2,\\mu^2)$ is the unintegrated gluon distribution 
in a proton and $\\phi_1$ and $\\phi_2$ are the azimuthal angles of the incoming gluons. The multiparticle phase space $\\Pi d^3 p_i \/ 2 E_i \\delta^{(4)} (\\sum p^{\\rm in} - \\sum p^{\\rm out} )$ is parametrized in terms of transverse momenta, rapidities and azimuthal angles:\n$$\n { d^3 p_i \\over 2 E_i} = {\\pi \\over 2} \\, d {\\mathbf p}_{iT}^2 \\, dy_i \\, { d \\phi_i \\over 2 \\pi}. \\eqno(18)\n$$\n\n\\noindent\nUsing the expressions~(17) and~(18) we obtain the master formula:\n$$\n \\displaystyle \\sigma(p + \\bar p \\to W^\\pm\/Z^0 + X) = \\sum_{q} \\int {1\\over 256\\pi^3 (x_1 x_2 s)^2} |\\bar {\\cal M}(g^* + g^* \\to W^\\pm\/Z^0 + q \\bar q^\\prime)|^2 \\times \\atop \n \\displaystyle \\times f_g(x_1,{\\mathbf k}_{1 T}^2,\\mu^2) f_g(x_2,{\\mathbf k}_{2 T}^2,\\mu^2) d{\\mathbf k}_{1 T}^2 d{\\mathbf k}_{2 T}^2 d{\\mathbf p}_{1 T}^2 d{\\mathbf p}_{2 T}^2 dy dy_1 dy_2 {d\\phi_1\\over 2\\pi} {d\\phi_2\\over 2\\pi} {d\\psi_1\\over 2\\pi} {d\\psi_2\\over 2\\pi}, \\eqno(19)\n$$\n\n\\noindent\nwhere $|\\bar {\\cal M}(g^* + g^* \\to W^\\pm\/Z^0 + q \\bar q^\\prime)|^2$ is the off-mass shell matrix element squared and averaged over the initial gluon polarizations and colors, and $\\psi_1$ and $\\psi_2$ are the azimuthal angles of the final state quark and antiquark, respectively. We would like to point out again that $|\\bar {\\cal M}(g^* + g^* \\to W^\\pm\/Z^0 + q \\bar q^\\prime)|^2$ strongly depends on the nonzero transverse momenta ${\\mathbf k}_{1 T}^2$ and ${\\mathbf k}_{2 T}^2$. If we average the expression~(19) over $\\phi_{1}$ and $\\phi_{2}$ and take the limit ${\\mathbf k}_{1 T}^2 \\to 0$ and ${\\mathbf k}_{2 T}^2 \\to 0$, we recover the expression for the $W^\\pm$\/$Z^0$ production cross section in the collinear $\\alpha \\alpha_s^2$ approximation.\n\nThe multidimensional integration in~(19) has been performed by means of the Monte Carlo technique, using the routine \\textsc{Vegas}~[36]. 
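The structure of such a Monte Carlo estimate can be illustrated with a plain (non-adaptive) sampler; \\textsc{Vegas} additionally builds an importance-sampling grid adaptively. The toy integrand below only stands in for the much more complicated integrand of~(19); this is a schematic sketch, not the code used in our calculations.

```python
import random

def mc_integrate(f, dims, n=200_000, seed=1):
    """Plain Monte Carlo estimate of the integral of f over [0,1]^dims.

    Returns the estimate together with its statistical error. An adaptive
    routine such as VEGAS improves the error by importance sampling; the
    plain estimator here only illustrates the principle.
    """
    rng = random.Random(seed)
    total = total_sq = 0.0
    for _ in range(n):
        u = [rng.random() for _ in range(dims)]
        y = f(u)
        total += y
        total_sq += y*y
    mean = total/n
    var = max(total_sq/n - mean*mean, 0.0)
    return mean, (var/n)**0.5

# Toy 3-dimensional integrand; exact value is 1/4 + 1/3 = 7/12.
val, err = mc_integrate(lambda u: u[0]*u[1] + u[2]**2, dims=3)
```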
The full C$++$ code is available from the authors upon request\\footnote{lipatov@theory.sinp.msu.ru}.\n\n\\subsection{The KMR unintegrated parton distributions} \\indent \n\nIn the present paper we have tried two different sets of unintegrated parton densities in a proton. The first of them is the Kimber-Martin-Ryskin set.\n\nThe KMR approach~[31] is a formalism for constructing parton distributions $f_a(x,{\\mathbf k}_T^2,\\mu^2)$, unintegrated over the parton transverse momenta ${\\mathbf k}_T^2$, from the known conventional parton distributions $xa(x,\\mu^2)$, where $a = g$ or $a = q$. This formalism is valid for a proton as well as a photon and can embody both DGLAP and BFKL contributions. It also accounts for the angular ordering which comes from coherence effects in gluon emission. The key observation here is that the $\\mu$ dependence of the unintegrated parton distributions $f_a(x,{\\mathbf k}_T^2,\\mu^2)$ enters at the last step of the evolution, and therefore single-scale evolution equations (pure DGLAP) can be used up to this step. 
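The logic of this last-step construction (DGLAP densities evolved with a single scale, multiplied by a Sudakov survival probability with an angular-ordering cutoff) can be sketched numerically. The fragment below evaluates a quark Sudakov form factor of the type of Eq.~(22) below by simple quadrature; the inputs ($\\Lambda_{\\rm QCD}=0.2$~GeV, LO running coupling, $n_f=4$) follow the numerical setup quoted later in the text, but the code is only an illustration, not the KMR implementation.

```python
import math

C_F, N_C = 4.0/3.0, 3.0

def alpha_s(q2, lam2=0.04, nf=4):
    """LO running coupling; lam2 = Lambda_QCD^2 = (0.2 GeV)^2."""
    b0 = (11.0*N_C - 2.0*nf)/(12.0*math.pi)
    return 1.0/(b0*math.log(q2/lam2))

def p_qq_integral(z_max, steps=1000):
    """Integral of P_qq(z) = C_F (1+z^2)/(1-z) from 0 to z_max (midpoint rule)."""
    h = z_max/steps
    s = 0.0
    for i in range(steps):
        z = (i + 0.5)*h          # stays strictly below z_max < 1
        s += C_F*(1.0 + z*z)/(1.0 - z)
    return s*h

def sudakov_tq(kt2, mu2, steps=400):
    """Quark Sudakov form factor T_q(kt2, mu2), Eq. (22), with
    z_max = mu/(mu + |p_T|) (angular-ordering cutoff)."""
    if kt2 >= mu2:
        return 1.0               # no evolution interval: T_q = 1
    ln_lo, ln_hi = math.log(kt2), math.log(mu2)
    h = (ln_hi - ln_lo)/steps
    expo = 0.0
    for i in range(steps):
        pt2 = math.exp(ln_lo + (i + 0.5)*h)
        z_max = math.sqrt(mu2)/(math.sqrt(mu2) + math.sqrt(pt2))
        expo += alpha_s(pt2)/(2.0*math.pi)*p_qq_integral(z_max)*h
    return math.exp(-expo)

# The no-emission probability falls as the evolution interval grows:
t1 = sudakov_tq(kt2=25.0, mu2=100.0)
t2 = sudakov_tq(kt2=4.0,  mu2=100.0)
assert 0.0 < t2 < t1 < 1.0
```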
In this approximation, the unintegrated quark and gluon distributions are given by the expressions~[31]\n$$\n \\displaystyle f_q(x,{\\mathbf k}_T^2,\\mu^2) = T_q({\\mathbf k}_T^2,\\mu^2) {\\alpha_s({\\mathbf k}_T^2)\\over 2\\pi} \\times \\atop {\n \\displaystyle \\times \\int\\limits_x^1 dz \\left[P_{qq}(z) {x\\over z} q\\left({x\\over z},{\\mathbf k}_T^2\\right) \\Theta\\left(\\Delta - z\\right) + P_{qg}(z) {x\\over z} g\\left({x\\over z},{\\mathbf k}_T^2\\right) \\right],} \\eqno (20)\n$$\n$$\n \\displaystyle f_g(x,{\\mathbf k}_T^2,\\mu^2) = T_g({\\mathbf k}_T^2,\\mu^2) {\\alpha_s({\\mathbf k}_T^2)\\over 2\\pi} \\times \\atop {\n \\displaystyle \\times \\int\\limits_x^1 dz \\left[\\sum_q P_{gq}(z) {x\\over z} q\\left({x\\over z},{\\mathbf k}_T^2\\right) + P_{gg}(z) {x\\over z} g\\left({x\\over z},{\\mathbf k}_T^2\\right)\\Theta\\left(\\Delta - z\\right) \\right],} \\eqno (21)\n$$\n\n\\noindent\nwhere $P_{ab}(z)$ are the usual unregulated leading order DGLAP splitting functions, and $q(x,\\mu^2)$ and $g(x,\\mu^2)$ are the conventional quark and gluon densities\\footnote{Numerically, we have used the standard GRV~(LO) parametrizations~[37].}. The theta functions appearing in~(20) and~(21) impose the angular-ordering constraint $\\Delta = \\mu\/(\\mu + |{\\mathbf k}_T|)$ on the last evolution step in order to regulate the soft gluon singularities. For the other evolution steps, the strong ordering in transverse momentum within the DGLAP equations automatically ensures angular ordering. It is important that the parton distributions $f_a(x,{\\mathbf k}_T^2,\\mu^2)$ are now extended into the ${\\mathbf k}_T^2 > \\mu^2$ region. This fact is in clear contrast with the usual DGLAP evolution\\footnote{We would like to note that the cut-off can also be taken as $\\Delta = |{\\mathbf k}_T|\/\\mu$~[31]. 
\nIn this case the unintegrated parton distributions given by (20) --- (21) vanish for ${\\mathbf k}_T^2 > \\mu^2$, in accordance with the DGLAP strong ordering in ${\\mathbf k}_T^2$.}.\n\nThe virtual (loop) contributions may be resummed to all orders by the quark and gluon Sudakov form factors\n$$\n \\ln T_q({\\mathbf k}_T^2,\\mu^2) = - \\int\\limits_{{\\mathbf k}_T^2}^{\\mu^2} {d {\\mathbf p}_T^2\\over {\\mathbf p}_T^2} {\\alpha_s({\\mathbf p}_T^2)\\over 2\\pi} \\int\\limits_0^{z_{\\rm max}} dz P_{qq}(z), \\eqno (22)\n$$\n$$\n \\ln T_g({\\mathbf k}_T^2,\\mu^2) = - \\int\\limits_{{\\mathbf k}_T^2}^{\\mu^2} {d {\\mathbf p}_T^2\\over {\\mathbf p}_T^2} {\\alpha_s({\\mathbf p}_T^2)\\over 2\\pi} \\left[ n_f \\int\\limits_0^1 dz P_{qg}(z) + \\int\\limits_{z_{\\rm min}}^{z_{\\rm max}} dz z P_{gg}(z) \\right], \\eqno (23)\n$$\n\n\\noindent\nwhere $z_{\\rm max} = 1 - z_{\\rm min} = \\mu\/(\\mu + |{\\mathbf p}_T|)$. The form factors $T_a({\\mathbf k}_T^2,\\mu^2)$ give the probability of evolving from a scale ${\\mathbf k}_T^2$ to a scale $\\mu^2$ without parton emission. In accordance with~(22) and~(23), $T_a({\\mathbf k}_T^2,\\mu^2) = 1$ in the ${\\mathbf k}_T^2 > \\mu^2$ region.\n\nNote that such a definition of $f_a(x,{\\mathbf k}_T^2,\\mu^2)$ is correct for ${\\mathbf k}_T^2 > \\mu_0^2$ only, where $\\mu_0 \\sim 1$ GeV is the minimum scale for which DGLAP evolution of the collinear parton densities is valid. Everywhere in our numerical calculations we set the starting scale to $\\mu_0 = 1$ GeV. Since the starting point of this derivation is the leading order DGLAP equations, the unintegrated parton distributions must satisfy the normalisation condition\n$$\n a(x,\\mu^2) = \\int\\limits_0^{\\mu^2} f_a(x,{\\mathbf k}_T^2,\\mu^2) d{\\mathbf k}_T^2. 
\eqno(24)\n$$\n\n\\noindent\nThis relation will be exactly satisfied if one defines~[31]\n$$\n f_a(x,{\\mathbf k}_T^2,\\mu^2)\\vert_{{\\mathbf k}_T^2 < \\mu_0^2} = a(x,\\mu_0^2) T_a(\\mu_0^2,\\mu^2). \\eqno(25)\n$$\n\n\\subsection{The CCFM unintegrated gluon distribution} \\indent \n\nThe CCFM gluon density has been obtained~[38] from the numerical solution of the CCFM equation. The function $f_g(x,{\\mathbf k}_T^2,\\mu^2)$ is determined by a convolution of the non-perturbative starting distribution $f_g^{(0)}(x)$ and the CCFM evolution kernel denoted by $\\tilde {\\cal A}(x,{\\mathbf k}_T^2,\\mu^2)$:\n$$\n f_g(x,{\\mathbf k}_T^2,\\mu^2) = \\int {d x'\\over x'} f_g^{(0)}(x') \\tilde {\\cal A}\\left({x\\over x'},{\\mathbf k}_T^2,\\mu^2\\right). \\eqno(26)\n$$\n\n\\noindent\nIn the perturbative evolution the gluon splitting function $P_{gg}(z)$ including nonsingular terms is applied, as described in~[39]. The input parameters in $f_g^{(0)}(x)$ were fitted to reproduce the proton structure function $F_2(x,Q^2)$. An acceptable fit to the measured $F_2$ values was obtained~[38] with $\\chi^2\/ndf = 1.83$ using statistical and uncorrelated systematic uncertainties (compared to $\\chi^2\/ndf \\sim 1.5$ in the collinear approach at NLO).\n\n\\section{Numerical results} \\indent\n\nWe are now in a position to present our numerical results. First we describe the theoretical uncertainties of our calculations.\n\nApart from the unintegrated parton distributions in a proton, there are several parameters which determine the overall normalization of the calculated $W^\\pm\/Z^0$ cross sections: the quark masses $m_1$ and $m_2$ and the factorization and renormalization scales $\\mu_F$ and $\\mu_R$ (the first of them is related to the evolution of the parton distributions, the other is responsible for the strong coupling constant). In the numerical calculations the masses of the light quarks were set to be equal 
to $m_u = 4.5$~MeV, $m_d = 8.5$~MeV, $m_s = 155$~MeV, and the charmed quark mass was set to $m_c = 1.5$~GeV. We have checked that the uncertainties coming from these quantities are negligible in comparison to the uncertainties connected with the scale and\/or the unintegrated parton densities. As is often done, we choose the renormalization and factorization scales to be equal: $\\mu_R = \\mu_F = \\mu = \\xi m_T$ (with $m_T$ the transverse mass of the produced vector boson). In order to investigate the scale dependence of our results, we vary the scale parameter $\\xi$ between $1\/2$ and~2 about the default value $\\xi = 1$. For completeness, we set $m_W = 80.403$~GeV, $m_Z = 91.1876$~GeV, $\\sin^2 \\theta_W = 0.23122$ and use the LO formula for the strong coupling constant $\\alpha_s(\\mu^2)$ with $n_f = 4$ active quark flavors at $\\Lambda_{\\rm QCD} = 200$~MeV (so that $\\alpha_s(M_Z^2) = 0.1232$). Note that we use the special choice $\\Lambda_{\\rm QCD} = 130$~MeV in the case of the CCFM gluon ($\\alpha_s(M_Z^2) = 0.1187$), as originally proposed in~[38].\n\nBefore we proceed to the numerical results, we would like to comment on the effect of the higher order QCD contributions~[30]. It is well known that the leading-order $k_T$-factorization approach naturally includes a large part of them\\footnote{See, for example, review~[29] for more details.}. These are corrections of kinematic origin, arising from real parton emission during the evolution cascade. Another part of the higher-order contributions comes from the logarithmic loop corrections, which have already been included in the Sudakov form factors~(22) and~(23). However, there are also non-logarithmic loop corrections, arising, for example, from the gluon vertex corrections to Fig.~2. To take these contributions into account we use the approach proposed in~[30]. 
It was demonstrated that the main part of the non-logarithmic loop corrections can be absorbed in the so-called K-factor given by the expression\n$$\n K(q + \\bar q^\\prime \\to W^\\pm\/Z^0) \\simeq \\exp \\left[C_F {\\alpha_s(\\mu^2)\\over 2 \\pi} \\pi^2 \\right], \\eqno(27)\n$$\n\n\\noindent\nwhere the color factor $C_F = 4\/3$. A particular choice $\\mu^2 = {\\mathbf p}_T^{4\/3} m^{2\/3}$ has been proposed~[22, 30] to eliminate sub-leading logarithmic terms. We choose this scale to evaluate the strong coupling constant $\\alpha_s(\\mu^2)$ in~(27).\n\nWe begin the discussion by presenting a comparison between the different contributions to the $W^\\pm\/Z^0$ total cross section. The solid, dashed and dotted histograms in Figs.~3 --- 6 represent the $g^* + g^* \\to W^\\pm\/Z^0 + q \\bar q^\\prime$, $q + g^* \\to W^\\pm\/Z^0 + q^\\prime$ and $q + \\bar q^\\prime \\to W^\\pm\/Z^0$ contributions to the rapidity distributions of the gauge bosons calculated under Tevatron (Figs.~3 and~4) and LHC conditions (Figs.~5 and~6). It is important that in the last two subprocesses we take into account only the valence quarks, within the usual collinear approximation. For illustration, we used here the KMR unintegrated gluon density. We found that the role of the gluon-gluon fusion subprocess is greatly increased at the LHC energy: it amounts to only about 2---3\\% of the valence quark component at the Tevatron, but to more than 40\\% at the LHC. Moreover, in the latter case it dominates over the valence contributions at central rapidities. 
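As an aside, the size of the K-factor~(27) under the scale choice $\\mu^2 = {\\mathbf p}_T^{4\/3} m^{2\/3}$ is easy to gauge numerically. The sketch below uses the LO running coupling with $\\Lambda_{\\rm QCD}=0.2$~GeV and $n_f=4$ quoted above and is purely illustrative:

```python
import math

C_F = 4.0/3.0

def alpha_s_lo(mu2, lam=0.2, nf=4):
    """LO running coupling, Lambda_QCD = 0.2 GeV, n_f = 4."""
    b0 = (33.0 - 2.0*nf)/(12.0*math.pi)
    return 1.0/(b0*math.log(mu2/(lam*lam)))

def k_factor(pt, m):
    """Eq. (27) with the scale mu^2 = pT^(4/3) m^(2/3) proposed in [22, 30]."""
    mu2 = pt**(4.0/3.0) * m**(2.0/3.0)
    return math.exp(C_F*alpha_s_lo(mu2)/(2.0*math.pi)*math.pi**2)

# Illustrative values for W production (m_W ~ 80.4 GeV); the K-factor
# decreases slowly with pT as the coupling runs:
for pt in (5.0, 20.0, 80.0):
    print(f"pT = {pt:5.1f} GeV: K ~ {k_factor(pt, 80.403):.2f}")
```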
The contribution of the valence quark-antiquark annihilation subprocess is important at the Tevatron and gives only a few percent at the LHC energy. As expected, the contribution of the $q + g^* \\to W^\\pm\/Z^0 + q^\\prime$ subprocess is significant in the forward rapidity region, $|y| > 2$. At this point, we can conclude that gluon-gluon fusion becomes an important production mechanism at high energies and therefore should be taken into account in the calculations. However, we would like to note that there is an additional contribution which is not included in the simple decomposition scheme proposed above. As mentioned above, in this scheme it was assumed that sea quarks appear only at the last gluon splitting, and there is no contribution from the quarks coming from the earlier steps of the evolution (we absorb the last step of the gluon ladder into the hard matrix element $g^* + g^* \\to W^\\pm\/Z^0 + q \\bar q^\\prime$). It is not clear in advance whether the last gluon splitting dominates or not. In order to model this additional component, we have repeated the calculations using the KMR unintegrated quark densities~(20) and the quark-antiquark annihilation $q + \\bar q^\\prime \\to W^\\pm\/Z^0$ matrix element. In these evaluations we omitted the last term and kept only the sea-quark part of the first term of~(20). Thus, we switch off the pure gluon component of the sea quark distributions and remove the valence quarks from the evolution ladder. In this way only the contributions to $f_q(x,{\\mathbf k}_T^2,\\mu^2)$ originating from the earlier evolution steps (those involving quarks) are taken into account. The dash-dotted histograms in Figs.~3 ---~6 represent the results of these calculations. 
We have found a\nsignificant (about 50\\%) enhancement of the cross sections\nat both the Tevatron and LHC conditions.\nTherefore in all calculations below we will include this\nmechanism as an additional production channel.\nFinally, taking into account all the components described above,\nwe can conclude that the gluon-gluon fusion contributes\n$\\sim 1$\\% to the total cross section at the Tevatron and up to $\\sim 25$\\%\nat LHC energies. \n\nNow we turn to the transverse momentum distributions of\nthe $W^\\pm$ and $Z^0$ bosons.\nThe experimental data for the transverse momentum\ndistributions come from both the D$\\oslash$~[8, 9] and CDF~[4] \ncollaborations at the Tevatron. These data were obtained at center-of-mass\nenergy $\\sqrt s = 1800$~GeV. Measurements were made for $W \\to l\\nu$\nand $Z \\to l^+l^-$ decays, so we should multiply our\ntheoretical predictions by the relevant branching fractions \n$f(W \\to l\\nu)$ and $f(Z \\to l^+l^-)$. \nThese branching fractions \nwere set to $f(W \\to l\\nu) = 0.1075$ and $f(Z \\to l^+l^-) = 0.03366$~[40].\nIn Figs.~7 --- 9 we display a comparison of the calculated \ndifferential cross sections $d\\sigma\/dp_T$ of the\n$W^\\pm$ and $Z^0$ boson production\nwith the experimental data~[4, 8, 9] in the low $p_T$ region, namely $p_T < 20$~GeV.\nNext, in Figs.~10 --- 12,\nwe demonstrate the $W^\\pm$ and $Z^0$ transverse momentum\ndistributions in the intermediate and high $p_T$ regions.\nAdditionally, in Figs.~13 and 14, we plot the normalized \ndifferential cross section\n$(1\/\\sigma)\\,d\\sigma\/dp_T$ of the $W^\\pm$ boson production.\nThe solid and dashed histograms correspond to the \nresults obtained with the CCFM and KMR unintegrated gluon densities, \nrespectively. All contributions discussed above are taken\ninto account. 
The dotted histograms were obtained using\nthe quark-antiquark annihilation matrix element \nconvoluted with the KMR unintegrated quark distributions\nin a proton (in this case the transverse momentum of the \nproduced vector boson is defined by the transverse momenta\nof the incoming quarks).\nWe found an increase in the cross section calculated\nin the proposed decomposition scheme (where only the unintegrated \ngluon densities are used).\nIn this scheme, we find that both the CCFM and KMR gluon distributions\nreproduce the Tevatron data well within the uncertainties,\nalthough the KMR gluon tends to slightly underestimate\nthe data in the low $p_T$ region. \nThe difference between the solid and dashed histograms in Figs.~7 --- 14\nis due to the different behaviour of the CCFM and KMR gluon densities.\nThe predictions based on the \nquark-antiquark annihilation subprocess lie\nbelow the experimental data but agree with them in shape.\nThis observation coincides with the one from~[30] where\nan additional factor of about 1.2 was introduced\nto eliminate the visible disagreement between the data and \ntheory\\footnote{In Ref.~[30] the authors explained the origin of \nthis extra factor by the fact that the input\nparton densities (used to determine the unintegrated ones)\nshould themselves be determined from data using\nthe appropriate non-collinear formalism.}. \n\nAn additional possibility to distinguish\nthe two calculation schemes comes from\nstudying the ratio of the $W^\\pm$ and $Z^0$ boson \ncross sections. 
In fact, since the $W^\\pm$ and $Z^0$ production\nproperties are very similar, the radiative corrections affecting\nthe individual distributions and the cross sections of the hard\nprocess factorize and cancel in this ratio as the transverse\nmomentum of the vector boson becomes smaller.\nTherefore the results for this ratio calculated \nin the decomposition scheme (where\nthe ${\\cal O}(\\alpha \\alpha_s)$ and ${\\cal O}(\\alpha \\alpha_s^2)$ \nsubprocesses are taken into account) and the predictions \nbased on the ${\\cal O}(\\alpha)$\nquark-antiquark annihilation should\ndiffer from each other at moderate and high $p_T$ values.\nThis fact is clearly illustrated in Fig.~15 where the ratio of\n$W^\\pm$ and $Z^0$ cross sections as a function of the\ntransverse momentum is displayed.\nAs expected, there is practically no difference\nbetween all plotted histograms in the low $p_T$ region.\n\nAs a final point of our study, we discuss the scale dependence of\nour results. In Figs.~16 and~17 we show the total\ncross section of the $W^\\pm$ and $Z^0$ boson \nproduction as a function of the total center-of-mass \nenergy $\\sqrt s$. Here, the solid and dotted histograms \ncorrespond to the results obtained with the CCFM \nand KMR unintegrated gluon densities, respectively. The\nupper and lower dashed histograms \ncorrespond to the scale variations in the CCFM gluon density \nas described above. We find that the\nscale uncertainties are approximately of the same order \nas the uncertainties coming from the unintegrated\ngluon distributions. This is typical of\nleading-order $k_T$-factorization calculations. 
\nOur predictions for the $W^\\pm$ and $Z^0$ boson \ntotal cross section agree well with the data\nin a wide $\\sqrt s$ range.\n\n\\section{Conclusions} \\indent \n\nWe have studied the production of \nelectroweak gauge bosons in hadronic collisions at high energies\nin the $k_T$-factorization approach of QCD.\n Our consideration is based on the scheme which provides solid \ntheoretical grounds \nfor adequately taking into account the effects of initial parton \nmomentum. The central part of our derivation is the off-shell gluon-gluon \nfusion subprocess $g^* + g^* \\to W^\\pm\/Z^0 + q \\bar q^\\prime$. \nAt the price of considering the corresponding matrix element \nrather than the $q + \\bar q^\\prime \\to W^\\pm\/Z^0$ one, \nwe have reduced the problem of unknown unintegrated quark distributions to the \nproblem of gluon distributions. \nThis enables us to make comparisons between the different parton \nevolution schemes and parametrizations of parton densities. \nSince the gluons are only responsible for the appearance of sea quarks, but \nnot valence quarks, the contribution from the valence quarks has been \ncalculated separately. Bearing in mind that the valence quarks are only \nimportant at large $x$, where the traditional DGLAP evolution is accurate and \nreliable, we have calculated this contribution within the usual collinear \nscheme based on the $q + g^* \\to W^\\pm\/Z^0 + q^\\prime$ and \n$q + \\bar q^\\prime \\to W^\\pm\/Z^0$ partonic subprocesses \nand on-shell parton densities. \n\nWe have studied in detail the different production \nmechanisms of $W^\\pm$ and $Z^0$ bosons. 
\nWe find that the off-shell gluon-gluon fusion\ngives $\\sim 1$\\% and $\\sim 25$\\% contributions\nto the inclusive $W^\\pm\/Z^0$ production cross sections\nat the Tevatron and the LHC, respectively.\nIn particular, we simulate the contribution from\nthe quarks involved in the earlier steps of the \nevolution cascade (i.e., in the second-to-last, third-to-last and \nother gluon splittings) and find that these quarks\nplay an important role at both the Tevatron and LHC energies. \nIt was demonstrated that the corresponding corrections\nshould be taken into account in the numerical calculations \nwithin the $k_T$-factorization approach.\n\nWe have calculated the total and differential $W^\\pm$ and $Z^0$ \nproduction cross sections and have made comparisons with the Tevatron \ndata. In the numerical analysis we have used \nthe unintegrated gluon densities obtained from the \nCCFM evolution equation and from the KMR prescription.\nOur numerical results agree well with\nthe experimental data.\n\nWhen the present paper was ready for publication,\nwe learned about the results obtained by\nM.~Deak and F.~Schwennsen~[41], who used the same theoretical approach,\nbut focused attention on slightly different aspects of the problem.\nThese authors concentrate on the associated $W^\\pm\/Z^0$ production\nwith heavy quark pairs (mainly on the $Z b\\bar b$ final \nstate), where the gluon-gluon fusion subprocess dominates.\nIn addition to that, we consider quark subprocesses, which \nare important for inclusive $W^\\pm\/Z^0$ production. We show that the \nexperimental data can be described when the quark contributions are taken\ninto account.\n\n\\section{Acknowledgements} \\indent \n\nWe thank H.~Jung for his encouraging interest, very helpful discussions\nand for providing the CCFM code for \nunintegrated gluon distributions. 
We\nthank M.~Deak and F.~Schwennsen for letting us\nknow about the results of their work before\npublication and for useful remarks.\nThe authors are very grateful to \nthe DESY Directorate for support in the \nframework of the Moscow --- DESY project on Monte-Carlo\nimplementation for HERA --- LHC.\nA.V.L. was supported in part by grants of the President of \nthe Russian Federation (MK-438.2008.2) and by the Helmholtz --- Russia\nJoint Research Group.\nThis research was also supported by the \nFASI of the Russian Federation (grant NS-8122.2006.2)\nand the RFBR foundation (grant 08-02-00896-a).\n\n\\section{Appendix A} \\indent \n\nHere we present the compact analytic expressions for the \ncross section of the $W^\\pm\/Z^0$ production\nvia the $q + \\bar q^\\prime \\to W^\\pm\/Z^0$ subprocess\nin the $k_T$-factorization approach. \nLet us define the transverse momenta and azimuthal\nangles of the incoming quark $q$ and antiquark $\\bar q^\\prime$ as \n${\\mathbf k}_{1T}$ and ${\\mathbf k}_{2T}$ and\n$\\phi_1$ and $\\phi_2$, respectively. \nThe produced vector boson has the transverse momentum\n${\\mathbf p}_{T}$ (${\\mathbf p}_{T} = {\\mathbf k}_{1T} + {\\mathbf k}_{2T}$) \nand center-of-mass rapidity $y$.\nThe $W^\\pm\/Z^0$ production cross section can be written as\n$$\n \\displaystyle \\sigma(p + \\bar p \\to W^\\pm\/Z^0 + X) = \\sum_{q} \\int {2 \\pi \\over (x_1 x_2 s)^2} |\\bar {\\cal M}(q + \\bar q^\\prime \\to W^\\pm\/Z^0)|^2 \\times \\atop \n \\displaystyle \\times f_q(x_1,{\\mathbf k}_{1 T}^2,\\mu^2) f_q(x_2,{\\mathbf k}_{2 T}^2,\\mu^2) d{\\mathbf k}_{1 T}^2 d{\\mathbf k}_{2 T}^2 dy {d\\phi_1\\over 2\\pi} {d\\phi_2\\over 2\\pi}, \\eqno(A.1)\n$$\n\n\\noindent \nwhere $f_q(x,{\\mathbf k}_{T}^2,\\mu^2)$ is the unintegrated quark\ndistribution given by~(20). In the high-energy limit the \nfractions $x_1$ and $x_2$ of the\ninitial proton's longitudinal momenta are given by\n$$\n x_1 \\sqrt s = m_T\\,e^y, \\quad x_2 \\sqrt s = m_T \\, e^{-y}, 
\\eqno(A.2)\n$$\n\n\\noindent \nwhere $m_T$ is the transverse mass of the vector boson.\nThe squared matrix element $|\\bar {\\cal M}(q + \\bar q^\\prime \\to W^\\pm)|^2$ \nsummed over final polarization states and averaged over initial ones is\n$$\n |\\bar {\\cal M}(q + \\bar q^\\prime \\to W^\\pm)|^2 = - {e^2\\over 72 m_W^2 \\sin^2 \\theta_W} \\left[(m_1^2 - m_2^2)^2 + m_W^2 (m_1^2 + m_2^2) - 2 m_W^4\\right], \\eqno(A.3)\n$$\n\n\\noindent \nwhere $m_1$ and $m_2$ are the masses of the incoming quarks.\nIn the case of $Z^0$ boson production,\nthe squared matrix element $|\\bar {\\cal M}(q + \\bar q \\to Z^0)|^2$ \nsummed over final polarization states and averaged over initial ones is\n$$\n \\displaystyle |\\bar {\\cal M}(q + \\bar q \\to Z^0)|^2 = {2 e^2\\over 9 \\sin^2 2 \\theta_W} \\times \\atop\n \\displaystyle \\times \\left[ (m_Z^2 - m^2)\\left[I_3^{(q)}\\right]^2 + 2 e_q (2 m^2 + m_Z^2) \\sin^2 \\theta_W \\left(e_q \\sin^2\\theta_W - I_3^{(q)}\\right) \\right], \\eqno(A.4)\n$$\n\n\\noindent \nwhere $m$, $e_q$ and $I_3^{(q)}$ are the mass, fractional electric charge and\nweak isospin of the incoming quark. Note that there is no explicit \ndependence on the transverse momenta of the initial quark and antiquark.\nHowever, this dependence is present because \nthe true off-shell kinematics is used. \nIn particular, the incident parton momentum fractions\n$x_1$ and $x_2$ have some ${\\mathbf k}_{T}$ \ndependence. If we take the limit ${\\mathbf k}_{1 T}^2 \\to 0$\nand ${\\mathbf k}_{2 T}^2 \\to 0$,\nthen we recover the relevant expression in the standard \ncollinear approximation of QCD.\n\n\\section{Appendix B} \\indent \n\nHere we present the analytic expressions for the \ncross section of the $W^\\pm\/Z^0$ production\nvia the $q + g^* \\to W^\\pm\/Z^0 + q^\\prime$ subprocess\nin the $k_T$-factorization approach. 
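As a quick numerical sanity check of the Appendix A formulas, the spin-averaged matrix element (A.3) can be evaluated directly; in the massless-quark limit it reduces to $e^2 m_W^2 \/ (36 \sin^2\theta_W) > 0$. The sketch below is illustrative: the electroweak inputs ($\alpha_{em}$, $\sin^2\theta_W$, $m_W$) are assumed values, not the ones used in the paper's fits.

```python
import math

# Sketch: checking Eq. (A.3), the spin-averaged |M|^2 for q + qbar' -> W.
# For massless quarks the bracket equals -2*m_W^4, so the overall minus
# sign yields the positive result e^2 * m_W^2 / (36 * sin^2(theta_W)).
# Numerical inputs below are illustrative assumptions.

ALPHA_EM = 1.0 / 128.0           # assumed
E2 = 4.0 * math.pi * ALPHA_EM    # e^2
SIN2_THETA_W = 0.23              # assumed
M_W = 80.4                       # GeV

def msq_qqbar_to_W(m1, m2):
    """Eq. (A.3): spin-averaged |M|^2 for q + qbar' -> W (quark masses in GeV)."""
    bracket = (m1**2 - m2**2)**2 + M_W**2 * (m1**2 + m2**2) - 2.0 * M_W**4
    return -E2 / (72.0 * M_W**2 * SIN2_THETA_W) * bracket

msq_massless = msq_qqbar_to_W(0.0, 0.0)
analytic = E2 * M_W**2 / (36.0 * SIN2_THETA_W)
```

The two values agree identically, confirming the sign conventions of (A.3).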
\nLet us define the transverse momenta and azimuthal\nangles of the incoming quark and off-shell gluon as \n${\\mathbf k}_{1T}$ and ${\\mathbf k}_{2T}$ and\n$\\phi_1$ and $\\phi_2$, respectively. \nIn the following, $\\hat s$, $\\hat t$ and $\\hat u$ are the usual Mandelstam \nvariables for the $2 \\to 2$ subprocess.\nThe $W^\\pm\/Z^0$ production cross section can be written as follows:\n$$\n \\displaystyle \\sigma(p + \\bar p \\to W^\\pm\/Z^0 + X) = \\sum_{q} \\int {1\\over 16\\pi (x_1 x_2 s)^2} |\\bar {\\cal M}(q + g^* \\to W^\\pm\/Z^0 + q^\\prime)|^2 \\times \\atop \n \\displaystyle \\times f_q(x_1,{\\mathbf k}_{1 T}^2,\\mu^2) f_g(x_2,{\\mathbf k}_{2 T}^2,\\mu^2) d{\\mathbf k}_{1 T}^2 d{\\mathbf k}_{2 T}^2 d{\\mathbf p}_{T}^2 dy dy^\\prime {d\\phi_1\\over 2\\pi} {d\\phi_2\\over 2\\pi}, \\eqno(B.1)\n$$\n\n\\noindent\nwhere $y^\\prime$ is the rapidity of the final quark $q^\\prime$. The\nfractions $x_1$ and $x_2$ of the\ninitial proton's longitudinal momenta are given by\n$$\n x_1 \\sqrt s = m_{T}\\,e^y + m_{T}^\\prime \\,e^{y^\\prime}, \\quad x_2 \\sqrt s = m_{T}\\, e^{-y} + m_{T}^\\prime \\,e^{-y^\\prime}, 
\\eqno(B.2)\n$$\n\n\\noindent \nwhere $m_T$ and $m_T^\\prime$ are the transverse masses of the \nvector boson and final quark $q^\\prime$.\nIf we take the limit ${\\mathbf k}_{1 T}^2 \\to 0$ and ${\\mathbf k}_{2 T}^2 \\to 0$,\nthen we recover the relevant expression in the usual \ncollinear approximation.\nThe squared matrix elements $|\\bar {\\cal M}(q + g^* \\to W^\\pm + q^\\prime)|^2$ \nand $|\\bar {\\cal M}(q + g^* \\to Z^0 + q)|^2$ summed \nover final polarization states and averaged over initial ones are\n$$\n |\\bar {\\cal M}(q + g^* \\to W^\\pm + q^\\prime)|^2 = {e^2 g^2 \\over 192 \\sin^2 \\theta_W} {F_W \\over (m_1^2 - \\hat s)^2 (m_2^2 - \\hat t)^2 m_W^2}, \\eqno(B.3)\n$$\n$$\n |\\bar {\\cal M}(q + g^* \\to Z^0 + q)|^2 = {2 e^2 g^2 \\over 3 \\sin^2 2 \\theta_W} {F_Z \\over (m^2 - \\hat s)^2 (m^2 - \\hat t)^2 m_Z^2}, \\eqno(B.4)\n$$\n\n\\noindent where\n$$\n F_W = -8 (m_1^8 (3 m_2^2 - \\hat t) + m_1^6 (m_2^4 + m_2^2 (2 m_W^2 - 5 \\hat s - 7 \\hat t) + \\hat t (\\hat s + 2 \\hat t)) +\n$$\n$$ \n m_1^4 (m_2^6 + m_2^4 (8 m_W^2 - 3 (\\hat s + \\hat t)) + m_2^2 (-6 m_W^4 + 3 \\hat s^2 + 13 \\hat s \\hat t + 5 \\hat t^2 - 6 m_W^2 (\\hat s + \\hat t)) - \n$$\n$$\n \\hat t (-2 m_W^4 - 2 m_W^2 \\hat s + (\\hat s + \\hat t)^2)) + m_1^2 (3 m_2^8 + m_2^6 (2 m_W^2 - 7 \\hat s - 5 \\hat t) + \n$$\n$$\n m_2^4 (-6 m_W^4 + 5 \\hat s^2 + 13 \\hat s \\hat t + 3 \\hat t^2 - 6 m_W^2 (\\hat s + \\hat t)) + m_2^2 (4 m_W^6 - \\hat s^3 - 11 \\hat s^2 \\hat t - 11 \\hat s \\hat t^2 - \\hat t^3 + \n$$\n$$\n 6 m_W^4 (\\hat s + \\hat t) + 6 m_W^2 (\\hat s^2 + \\hat t^2)) + \\hat t (-4 m_W^6 + 2 m_W^4 \\hat s + \\hat s (\\hat s + \\hat t)^2 - \n$$\n$$\n 2 m_W^2 (2 \\hat s^2 - \\hat s \\hat t + \\hat t^2))) + \\hat s (-m_2^8 + m_2^6 (2 \\hat s + \\hat t) + 2 m_W^2 \\hat t (2 m_W^4 + \\hat s^2 + \\hat t^2 - 2 m_W^2 (\\hat s + \\hat t)) + \n$$\n$$\n m_2^4 (2 m_W^4 + 2 m_W^2 \\hat t - (\\hat s + \\hat t)^2) + m_2^2 (-4 m_W^6 + 2 m_W^4 \\hat t + \\hat t (\\hat s + \\hat t)^2 - \n$$\n$$\n 2 
m_W^2 (\\hat s^2 - \\hat s \\hat t + 2 \\hat t^2))) + (m_1^8 + m_2^8 + m_1^6 (m_W^2 - 2 (\\hat s + \\hat t)) + m_2^6 (m_W^2 - 2 (\\hat s + \\hat t)) + \n$$\n$$\n m_2^4 (-2 m_W^4 - 2 m_W^2 \\hat t + (\\hat s + \\hat t)^2) + m_2^2 m_W^2 (5 \\hat s^2 + 4 \\hat s \\hat t + \\hat t^2 + m_W^2 (-8 \\hat s + 4 \\hat t)) - \n$$\n$$\n 2 m_W^2 (2 \\hat s \\hat t (\\hat s + \\hat t) + m_W^2 (\\hat s^2 - 4 \\hat s \\hat t + \\hat t^2)) + m_1^4 (-2 m_2^4 - 2 m_W^4 - 2 m_W^2 \\hat s + (\\hat s + \\hat t)^2 + \n$$\n$$\n m_2^2 (m_W^2 + 2 (\\hat s + \\hat t))) + m_1^2 (m_W^2 (\\hat s^2 + 4 m_W^2 (\\hat s - 2 \\hat t) + 4 \\hat s \\hat t + 5 \\hat t^2) + m_2^4 (m_W^2 + 2 (\\hat s + \\hat t)) + \n$$\n$$\n m_2^2 (8 m_W^4 - 6 m_W^2 (\\hat s + \\hat t) - 2 (\\hat s + \\hat t)^2))) ( - {\\mathbf k}_{2T}^2 ) + 4 m_W^2 (m_1^2 - \\hat s) (m_2^2 - \\hat t) {\\mathbf k}_{2T}^4), \\eqno(B.5)\n$$\n$$\n F_Z = -2 e_q I_3^{(q)} m_Z^2 \\sin^2 \\theta_W (6 m^8 - \\hat s \\hat t (2 m_Z^4 + \\hat s^2 + \\hat t^2 - 2 m_Z^2 (\\hat s + \\hat t)) - \n$$\n$$\n m^4 (2 m_Z^4 + 3 \\hat s^2 + 14 \\hat s \\hat t + 3 \\hat t^2 - 2 m_Z^2 (\\hat s + \\hat t)) + m^2 (\\hat s^3 - 8 m_Z^2 \\hat s \\hat t + 7 \\hat s^2 \\hat t + \n$$\n$$\n 7 \\hat s \\hat t^2 + \\hat t^3 + 2 m_Z^4 (\\hat s + \\hat t))) + 2 e_q^2 m_Z^2 \\sin^4 \\theta_W (6 m^8 - \\hat s \\hat t (2 m_Z^4 + \\hat s^2 + \\hat t^2 - 2 m_Z^2 (\\hat s + \\hat t)) - \n$$\n$$\n m^4 (2 m_Z^4 + 3 \\hat s^2 + 14 \\hat s \\hat t + 3 \\hat t^2 - 2 m_Z^2 (\\hat s + \\hat t)) + m^2 (\\hat s^3 - 8 m_Z^2 \\hat s \\hat t + 7 \\hat s^2 \\hat t + 7 \\hat s \\hat t^2 + \\hat t^3 + \n$$\n$$\n 2 m_Z^4 (\\hat s + \\hat t))) + \\left[I_3^{(q)}\\right]^2 (-4 m^{10} + m^8 (-6 m_Z^2 + 8 (\\hat s + \\hat t)) - m_Z^2 \\hat s \\hat t (2 m_Z^4 + \\hat s^2 + \\hat t^2 - \n$$\n$$\n 2 m_Z^2 (\\hat s + \\hat t)) + m^6 (6 m_Z^4 - 5 \\hat s^2 - 14 \\hat s \\hat t - 5 \\hat t^2 + 6 m_Z^2 (\\hat s + \\hat t)) + \n$$\n$$\n m^4 (-2 m_Z^6 + \\hat s^3 + 7 \\hat s^2 \\hat t + 7 \\hat s \\hat t^2 + 
\\hat t^3 - 4 m_Z^4 (\\hat s + \\hat t) - m_Z^2 (3 \\hat s^2 + 2 \\hat s \\hat t + 3 \\hat t^2)) + \n$$\n$$\n m^2 (-2 m_Z^4 \\hat s \\hat t + 2 m_Z^6 (\\hat s + \\hat t) - \\hat s \\hat t (\\hat s + \\hat t)^2 + m_Z^2 (\\hat s^3 + \\hat s^2 \\hat t + \\hat s \\hat t^2 + \\hat t^3))) +\n$$\n$$ \n m_Z^2 (2 e_q I_3^{(q)} \\sin^2 \\theta_W (2 m^4 (m_Z^2 - \\hat s - \\hat t) - 2 \\hat s \\hat t (\\hat s + \\hat t) - m_Z^2 (\\hat s^2 - 4 \\hat s \\hat t + \\hat t^2) - \n$$\n$$\n 2 m^2 (-4 \\hat s \\hat t + m_Z^2 (\\hat s + \\hat t))) + 2 e_q^2 \\sin^4 \\theta_W (2 \\hat s \\hat t (\\hat s + \\hat t) + 2 m^4 (-m_Z^2 + \\hat s + \\hat t) + \n$$\n$$\n m_Z^2 (\\hat s^2 - 4 \\hat s \\hat t + \\hat t^2) + 2 m^2 (-4 \\hat s \\hat t + m_Z^2 (\\hat s + \\hat t))) + \\left[I_3^{(q)}\\right]^2 (-2 m^6 + 2 \\hat s \\hat t (\\hat s + \\hat t) + \n$$\n$$\nm_Z^2 (\\hat s^2 - 4 \\hat s \\hat t + \\hat t^2) + m^4 (-2 m_Z^2 + 4 (\\hat s + \\hat t)) + m^2 (-3 \\hat s^2 - 4 \\hat s \\hat t - 3 \\hat t^2 + \n$$\n$$\n 2 m_Z^2 (\\hat s + \\hat t)))) ( - {\\mathbf k}_{2T}^2 ) - 2 m_Z^2 (m^2 - \\hat s) (\\left[I_3^{(q)}\\right]^2 - \n$$\n$$\n 2 e_q I_3^{(q)} \\sin^2 \\theta_W + 2 e_q^2 \\sin^4 \\theta_W) (m^2 - \\hat t) {\\mathbf k}_{2T}^4. \\eqno(B.6)\n$$\n\n\\section{Introduction}\n\nRecently the Alpha Magnetic Spectrometer (AMS-02) on the International\nSpace Station has reported that the antiproton to proton ratio stays\nconstant from 20 GeV to 450 GeV kinetic energy~\\cite{AMS-02DATA}.\nThis behavior cannot be explained by the secondary antiprotons from\ncollisions of ordinary cosmic rays with interstellar\nmedium~\\cite[e.g.,][]{Kachelriess:2015wpa}. 
This suggests a new\nsource such as astrophysical accelerators and annihilating or decaying\ndark matter, although there are still uncertainties in the background\nmodeling~\\cite{Giesen:2015ufa}.\n\nThe excess of antiprotons looks surprisingly similar to what we\npredicted~\\cite{Fujita:2009wk} when the PAMELA experiment detected the\npositron excess~\\cite{Adriani:2008zr} and the Fermi, HESS, and\nATIC\/PPB-BETS experiments observed the electron\nanomaly~\\cite{cha08,Abdo:2009zk,Aharonian:2009ah}. We considered\nrecent supernova explosions in a dense gas cloud (DC) near the Earth.\nThe antiprotons and positrons are produced as secondaries by the $pp$\ncollisions between cosmic-ray protons accelerated by the supernova\nremnant (SNR) and target protons in a DC which surrounds the\nSNRs~\\cite{Fujita:2009wk}. Since the fundamental process determines\nthe branching fraction, the positron excess should accompany the\nantiproton excess.\n\nThere are several variants of such a hadronic model, e.g., the\nreacceleration of secondaries by SNR shocks~\\cite{Blasi:2009bd} or the\nnon-standard propagation that increases secondaries from ordinary\ncosmic-ray collisions with interstellar\nmatter~\\cite{Blum:2013zsa,Cowsik:2013woa,Guo:2014laa}. \nSince the element ratio in these models is the same as that of the\nordinary cosmic rays, \nthe ratio of secondaries (e.g., Li, Be, B) to\nprimaries (C, N, O) must also rise with energy beyond $\\sim 100$ GeV,\nwhich has not been\nobserved yet~\\cite{Mertsch:2009ph,Mertsch:2014poa,Cholis:2013lwa}. In\ncontrast to these models, our model can accommodate the observations\nas shown below.\n\nBefore AMS-02, the antiproton observations were consistent with the\nsecondary background~\\cite{Adriani:2008zq}. Thus the leading models\nfor the positron excess are leptonic, such as pulsars and leptophilic\ndark\nmatter~\\cite[e.g.,][]{Serpico:2011wg,Fan:2010yq,Ioka:2008cv,Kashiyama:2010ui}.\nPulsars usually cannot explain the antiproton excess. 
On the\nother hand, the dark matter interpretation of the positron excess is now severely\nconstrained by other messengers such as gamma-rays and the cosmic\nmicrowave background~\\cite[e.g.,][]{Ackermann:2015zua,Ade:2015xua},\nand hence we may need fine-tuning in dark matter models to reproduce\nboth the antiproton and positron excesses~\\cite{Giesen:2015ufa,DM2015,Evoli:2015vaa,Kohri:2013sva}.\n\nFollowing Occam's razor, we reexamine our nearby SNR model \nand simultaneously fit the antiproton fraction, positron fraction, \nand total electron and positron flux in light of the new AMS-02 data.\nIn particular the $pp$ collisions give the correct branching fraction\nfor the observed positron to antiproton ratio.\nThroughout this paper we adopt units of $c = \\hbar = k_B = 1$.\n\n\\section{Supernova explosions in a Dense Cloud}\n\nHere we consider supernova explosions which occurred around $\\sim\n10^{5} - 10^{6} $ years ago in a DC. We assume the\nDC is located around $\\sim$~100--200~pc away from the Earth, \nlike the progenitor DCs that produced the Local Bubble (LB) or\nLoop~I. In general a massive star, which later explodes as a supernova,\ntends to be born in a giant\nDC~\\cite{lar82}. In this paper, we\nassume that the giant DC is ionized and the temperature is\n$\\sim 10^4$~K at the time of the\nexplosion~\\cite{whi79}. The shock of an SNR accelerates protons, which\nproduce copious energetic mesons (pions and kaons, etc.) and baryons\n(antiprotons, protons, antineutrons, neutrons, etc.) through the $pp$\ncollisions in the surrounding DC. The mesons further decay into\nenergetic positrons, electrons, gamma-rays, and neutrinos. 
In total, those local secondary particles can be observed at the Earth as\ncosmic rays in addition to the standard background components.\n\nThe energy spectrum of the accelerated protons is parametrized by\n\\begin{equation}\n\\label{eq:NE}\n \\frac{dn_p}{dE_p} \\propto E_p^{-s} e^{- \\frac{E_p}{E_{\\rm max, p}}},\n\\end{equation}\nwhere $s$ is the spectral index. \nThe age of the SNR, $t_{\\rm age}$, approximately determines the maximum\nenergy~\\cite{yam06},\n\\begin{equation}\n E_{\\rm max,p} \\sim 2 \\times10^2 v_{s,8}^2\n\\left(\\frac{B_{\\rm d}}{10~\\rm\\mu G}\\right)\n\\left(\\frac{t_{\\rm age}}{10^5{\\rm yr}}\\right)~{\\rm TeV}~,\n\\label{eq:Emax_p}\n\\end{equation}\nwhere the shock velocity, $v_s$, enters through $v_{s,8}=v_s\/10^8$cm~s$^{-1} \\sim\nO(1)$, and $B_{\\rm d}$ is the downstream magnetic field. We take\nthe minimum energy of the protons to be their rest mass. We assume that\nthe supernova explodes at the center of a DC for simplicity. In\naddition, we also assume that the acceleration stops when the Mach\nnumber of the shock decreases to 7~\\cite{Fujita:2009wk}. We define\nthis time as the acceleration time\n$t_{\\rm acc}=t_{\\rm age}$, \nand the energy spectrum at\nthis time is given by $s \\sim 2$ and $E_{\\rm max,p} \\sim 120$\nTeV~\\cite{Fujita:2009wk}. The SNR continues to expand even at $t_{\\rm\n age} > t_{\\rm acc}$.\n\nThe radius is $50$~pc at\n$t_{\\rm age}=5\\times 10^5 $~yr. Since this is comparable to the size of\na giant DC, $R_{\\rm DC}$~\\cite{mck07}, and the initial energy of the\nejecta from the supernovae is larger than the binding energy of a DC,\nthe cloud would be destroyed around this time. Until it is destroyed,\nthe DC is illuminated by the accelerated protons from the inside with\nthe spectrum of Eq. (\\ref{eq:NE}) given at $t_{\\rm age}\\sim t_{\\rm\n acc}$. 
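The scaling of Eq.~(\ref{eq:Emax_p}) is simple enough to encode directly. The helper below is an illustrative sketch (the function name and default arguments are our own); it reproduces the fiducial $2\times10^2$~TeV and shows that the quoted $E_{\rm max,p} \sim 120$~TeV corresponds to $t_{\rm age} \approx 6\times10^4$~yr for the fiducial $v_{s,8}$ and $B_{\rm d}$.

```python
# Sketch of the maximum accelerated-proton energy, Eq. (2):
#   E_max,p ~ 2e2 * v_{s,8}^2 * (B_d / 10 muG) * (t_age / 1e5 yr) TeV.
# Function name and defaults are our own illustrative choices.

def e_max_p_TeV(v_s8=1.0, B_d_muG=10.0, t_age_yr=1.0e5):
    """Maximum proton energy in TeV from the Eq. (2) scaling."""
    return 2.0e2 * v_s8**2 * (B_d_muG / 10.0) * (t_age_yr / 1.0e5)

e_fiducial = e_max_p_TeV()              # 200 TeV at fiducial parameters
e_at_acc = e_max_p_TeV(t_age_yr=6.0e4)  # ~120 TeV, close to the quoted E_max,p
```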
The duration of the exposure, $t_{pp}$, could be approximated\nby the time elapsed from the explosion of the supernovae to the\ndestruction of the DC because the timescale $t_{\\rm acc}$ is\nshorter than $5\\times 10^5 $~yr. \n\n\nAfter the destruction of the DC, the produced charged particles such\nas $\\bar{p}$, $p$, $e^{+}$, or $e^{-}$ propagate through diffusion\nprocesses and reach the Earth. Since we assume that the DC has\nalready been destroyed well before the present epoch, there are some\ndifferences in arrival times between those charged particles and\nmassless neutral particles such as photons. It should be a reasonable\nassumption that we would not detect any photon and neutrino signals\nfrom the DC $\\sim 10^{5-6}$ years after the destruction of the DC.\n\nWe have calculated the spectra of those daughter particles produced through the\n$pp$ collisions by using the PYTHIA Monte Carlo event\ngenerator~\\cite{Sjostrand:2006za} (see~\\cite{yam06} for the\ndetails). Then we solve the diffusion equation of the charged particle\n``$i$'' ($i$ runs over $\\bar{p}$, $p$, $e^{+}$, and $e^{-}$),\n\\begin{equation}\n \\label{eq:diff_eq}\n \\frac{\\partial f_{i}}{\\partial t} \n= K(\\varepsilon_{i}) \\Delta f_{i} +\n \\frac{\\partial}{ \\partial \\varepsilon_{i}} \\left[\n B(\\varepsilon_{i}) \nf_{i}\\right] + Q(\\varepsilon_{i})\n\\end{equation}\nwhere $f_{i}(t,{\\boldmath{x},\\varepsilon_{i}})$ is the distribution\nfunction of an $i$ particle, and $\\varepsilon_{i} = E_{i}\/{\\rm GeV}$\nwith $E_{i}$ being the energy of the $i$ particle. 
The flux is given\nby \n\\begin{eqnarray}\n \\label{eq:totalflux0}\n \\Phi_{i} (t,{\\boldmath{x},\\varepsilon_{i}}) = \\frac{1}{4\\pi} f_{i}.\n\\end{eqnarray}\nWe adopt a diffusion model 08-005 given in~\\cite{Moskalenko:1997gh}\nwith the diffusion coefficient,\n\\begin{equation}\n K(\\varepsilon_{e}) = K_{0}\n \\left(1 +\n\\frac{\\varepsilon_{e}}{3{\\rm GeV} }\n\\right)^{\\delta},\n\\end{equation}\nwith $K_{0} = 2 \\times 10^{28} {\\rm cm}^{2}{\\rm s}^{-1}$ and $\\delta =\n0.42$~\\cite{AMS-02:2013conf,Genolini:2015cta,Evoli:2015vaa}. The cooling rate through the synchrotron emission and the\ninverse Compton scattering is collectively parametrized to be~\\cite{Baltz:1998xv}\n\\begin{equation}\n B(\\varepsilon_{e}) \\sim 10^{-16} {\\rm s}^{-1} \\varepsilon_{e}^{2 }\n\\left[ 0.2\n\\left(\n\\frac{B_{\\rm diff} } {3 \\mu {\\rm G}}\n\\right)^{2} + 0.9 \n\\right],\n\\end{equation}\nwhere $B_{\\rm diff}$ is the magnetic field outside the DC. This set of\nthe parameters approximately corresponds to the MED model of the\ncosmic-ray propagation~\\cite{Bottino:2005xy}.\n\n\nIf we assume that the timescale of the production is shorter than that\nof the diffusion, $\\sim d^{2}\/(Kc)$ with $d$ being the distance to the\nsource, and the source of the daughter particles is spatially\nlocalized sufficiently, we can use the known analytical solution\nin~\\cite{Atoian:1995ux}. 
When the source spectrum is a\npower law with an index $\\alpha$,\n\\begin{eqnarray}\n \\label{eq:Qdetail}\n Q =Q_{0}\\varepsilon^{-\\alpha}\\delta(\\boldmath{x})\\delta(t),\n\\end{eqnarray}\nthen the solution is given by\n\\begin{equation}\n \\label{eq:f_diff}\nf_{e}=\\frac{Q_0 }{\\pi^{3\/2} d_{\\rm diff}^3}\n\\varepsilon_e^{-\\alpha}\n\\left(1-\\frac{\\varepsilon_e}{\\varepsilon_{\\rm cut}}\\right)^{\\alpha-2}\ne^{-(\\frac{\\bar{d}} {d_{\\rm diff}})^2},\n\\end{equation}\nwhere $\\varepsilon_{\\rm cut} = \\varepsilon_e^2 \/ B t_{\\rm diff}$, and the\ndiffusion length is represented by\n\\begin{equation}\n d_{\\rm diff} = 2 \\sqrt{K t_{\\rm diff} \\frac{1 - (1 -\n\\frac{\\varepsilon_e} {\\varepsilon_{\\rm cut}})^{1 - \\delta}}{\n(1-\\delta) \n\\frac {\\varepsilon_e } {\\varepsilon_{\\rm cut}} }}\\:.\n\\end{equation}\nHere $\\bar{d}$ denotes the effective distance to the source, obtained by spatially\naveraging the distance to the volume elements of the source, and we\nassume $\\alpha \\simeq s$. We approximately have\n\\begin{eqnarray}\n \\label{eq:Q0emalpha}\n Q_0\\varepsilon_{i}^{-\\alpha} \\sim V_{s}t_{pp} \\frac{d^{2}n_{i}}{dtdE_{i}} \n\\end{eqnarray}\nwith $V_{s}$ the source volume, where\n\\begin{equation}\n \\frac{d^{2}n_{i}}{dtdE_{i}} = \\int d E_{p} n_{0} \\frac{dn_p}{dE_p} \\sum_{j}\ng_{j} \\frac{v_pd\\sigma_{j}}{dE_{i}} \\:.\n\\end{equation}\nThe differential cross section of the ``$j$''-mode for the production\nof the $i$ particle is denoted by\n$d\\sigma_{j}(E_{p},E_{i})\/dE_{i}$, with the multiplicity of the\n$j$-mode $g_{j} = g_{j}(E_{p},E_{i})$, and $v_p=v_p(E_p)$ is the velocity of the\nprimary proton. We also consider the free neutron (antineutron) decay\nfor the electron (positron) production process, which is not included\nin the original version of PYTHIA. 
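The analytic solution, Eqs.~(\ref{eq:Qdetail})--(\ref{eq:f_diff}), can be sketched in a few lines. The snippet below implements $f_e$ with the diffusion and cooling coefficients quoted in the text ($K_0 = 2\times10^{28}$~cm$^2$~s$^{-1}$, $\delta = 0.42$, $B_{\rm diff} = 3\,\mu$G); the overall normalization $Q_0$ and the unit conversions are illustrative assumptions.

```python
import math

# Sketch of the analytic diffusion solution, Eqs. (5)-(9): cooled
# electron/positron spectrum from a burst-like localized source.
# K0, delta and the cooling coefficient follow the text; Q0 and the
# distance are illustrative assumptions, not fitted values.

CM_PER_PC = 3.086e18
SEC_PER_YR = 3.156e7
K0 = 2.0e28            # cm^2/s
DELTA = 0.42
B_COOL = 1.1e-16       # s^-1 per GeV^2: 1e-16 * [0.2*(3muG/3muG)^2 + 0.9]

def K_diff(eps):
    """Diffusion coefficient K(eps) = K0 * (1 + eps/3 GeV)^delta."""
    return K0 * (1.0 + eps / 3.0)**DELTA

def f_e(eps, t_diff_yr, d_pc, Q0=1.0, alpha=2.15):
    """Eq. (8): diffused spectrum at distance d_pc after t_diff_yr
    (arbitrary normalization Q0; eps in GeV)."""
    t = t_diff_yr * SEC_PER_YR
    eps_cut = 1.0 / (B_COOL * t)          # cooling cutoff in GeV
    if eps >= eps_cut:
        return 0.0                        # fully cooled away
    x = eps / eps_cut
    d_diff = 2.0 * math.sqrt(K_diff(eps) * t
                             * (1.0 - (1.0 - x)**(1.0 - DELTA))
                             / ((1.0 - DELTA) * x))
    d = d_pc * CM_PER_PC
    return (Q0 / (math.pi**1.5 * d_diff**3) * eps**(-alpha)
            * (1.0 - x)**(alpha - 2.0) * math.exp(-(d / d_diff)**2))

# Cooling cutoff for t_diff = 2e5 yr
eps_cut_TeV = 1.0 / (B_COOL * 2.0e5 * SEC_PER_YR) / 1.0e3
```

For $t_{\rm diff} = 2\times10^5$~yr the cutoff comes out near 1.4~TeV, consistent with the $\sim$~TeV cooling cutoff quoted in the following section.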
The initial proton spectrum\n$\\frac{dn_p}{dE_p}$ can be obtained from the normalization to the total energy,\n\\begin{equation}\n V_{s}\\int dE_{p}\\, E_{p} \\frac{dn_p}{dE_p}= E_{\\rm tot, p} .\n\\end{equation}\nFor the local propagation of protons and antiprotons, their cooling is\nnegligible, unlike that of electrons and positrons. Additionally we can omit\nannihilations of antiprotons through scattering off the background\nprotons because the scattering rate is small. We can also omit\nconvection by interstellar turbulence\nwithin the galaxy. An analytical\nsolution for the proton and the antiproton is also given by the same\nequation as Eq.~(\\ref{eq:f_diff}) in the limit of\n$\\varepsilon_p\/\\varepsilon_{\\rm cut} = 0$.\n\n\\section{Antiproton and positron fittings}\n\n\n\n\\begin{figure}[t]\n \\begin{center}\n \\vspace{-0. cm}\n \\includegraphics[width=100mm]{pbar20151204.eps}\n \\vspace{-1. cm}\n \\caption{Antiproton fraction fitted to the data. The data points\n are taken from~\\cite{AMS-02DATA} for AMS-02, and\n from~\\cite{Adriani:2008zq} for PAMELA. The dotted line is plotted\n using only the background flux~\\cite{Nezri:2009jd}. The\n shadow region represents the uncertainties of the background\n flux among the propagation models shown\n in~\\cite{AMS-02DATA}. Cosmic rays below an energy $\\lesssim$\n 10~GeV are affected by the solar modulation. \n We choose the background line and its uncertainty band only\n for demonstration purposes. This choice is not\n essential for our conclusion (see the text \n about Fig.~\\ref{fig:posipbar}).} \n \\end{center}\n\\label{fig:anti_p}\n\\end{figure}\n\n \n\nIn Fig.~1, we plot the antiproton fraction at the Earth\nin our model (see also the similar model named ``model B'' given in\nRef.~\\cite{Fujita:2009wk}). For the background flux, we adopted a value\n20$\\%$ smaller than the mean value shown\nin~\\cite{Nezri:2009jd}. Here, the radius of the spherical DC is taken to be\n$R_{\\rm DC}=40$~pc. 
The target proton density is set to be\n$n_0 = 50\\rm\\: cm^{-3}$. The spectral index $s=2.15$ \nand the maximum\nenergy, $E_{\\rm max}=100$~TeV, are assumed. We take the duration of the\n$pp$ collision to be $t_{pp}=2\\times 10^5$~yr. The total energy of the\naccelerated protons is assumed to be\n$E_{\\rm tot,p}=2.6 \\times 10^{50}$~erg. The distance to the front of\nthe DC is set to be $d=200$~pc. For the diffusion time of $e^-$ and\n$e^+$, we adopt $t_{\\rm diff}=2\\times 10^5$~yr. We take the magnetic\nfield outside the DC to be $B_{\\rm diff}=3\\rm\\: \\mu G$\n(see~\\cite{Fujita:2009wk} for further details).\n\n\n\\begin{figure}[t]\n \\begin{center}\n \\includegraphics[width=100mm]{combC20151204.eps} \n \\vspace{-1.3 cm}\n \\caption{(a) Positron fraction (solid line), which includes the\n electrons and positrons coming from the DC and background\n electrons (dotted line, for example see\n Refs.~\\cite{Moskalenko:1997gh,Baltz:1998xv}). Filled circles\n correspond to the AMS-02 data~\\cite{AMS-02DATA,AMSe+e-14,AMSv1}\n and PAMELA data~\\cite{Adriani:2008zr}. (b) Total electron and\n positron flux (solid line). The flux of the electrons and\n positrons created only in the DC (background) is plotted by the\n dashed (dotted) line. Observational data by AMS-02, Fermi, HESS,\n BETS, PPB-BETS, and\n ATIC2~\\cite{cha08,Abdo:2009zk,Aharonian:2009ah,AMSe++e-14} are\n also plotted. The shadow region represents the uncertainty of\n the HESS data.}\n \\end{center}\n \\label{fig:posifra}\n\\end{figure}\n\n\n\nIn Fig.~2, we also plot the positron fraction and the total\n$e^-$+$e^+$ flux. It is remarkable that we can automatically fit the\nobservational data of both the positron fraction and the total $e^-$ +\n$e^+$ flux by using the same set of\nparameters~\\cite{Fujita:2009wk}. 
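The normalization of the proton spectrum against the total energy budget can be sketched as follows. The log-grid trapezoidal integration and the constant $A$ (which absorbs the source volume $V_s$) are our own illustrative choices; $s = 2.15$, $E_{\rm max} = 100$~TeV, and $E_{\rm tot,p} = 2.6\times10^{50}$~erg are taken from the parameter set above.

```python
import math

# Sketch: fixing the normalization A of the accelerated-proton spectrum
#   dn_p/dE_p = A * E_p^(-s) * exp(-E_p/E_max)
# so that the spectrum carries the total energy E_tot,p.
# Grid choices and the helper names are our own illustrative assumptions.

S_INDEX = 2.15
E_MAX = 1.0e5        # GeV (100 TeV)
E_MIN = 0.938        # GeV, proton rest mass (lower bound used in the text)
ERG_PER_GEV = 1.602e-3
E_TOT = 2.6e50 / ERG_PER_GEV   # total cosmic-ray proton energy in GeV

def spectrum_shape(E):
    """Unnormalized dn_p/dE_p of Eq. (1)."""
    return E**(-S_INDEX) * math.exp(-E / E_MAX)

def energy_integral(n=4000):
    """Trapezoidal integral of E * dn_p/dE_p on a logarithmic grid."""
    lo, hi = math.log(E_MIN), math.log(10.0 * E_MAX)
    total = 0.0
    for i in range(n):
        a = math.exp(lo + (hi - lo) * i / n)
        b = math.exp(lo + (hi - lo) * (i + 1) / n)
        total += 0.5 * (a * spectrum_shape(a) + b * spectrum_shape(b)) * (b - a)
    return total

A_NORM = E_TOT / energy_integral()   # absorbs V_s; units GeV^(s-1)
```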
Here the cooling cutoff energy is\napproximately given by $\\varepsilon_{\\rm cut} = \\epsilon_e^2 \/ B t_{\\rm diff}\n\\sim$~1~TeV $(t_{\\rm diff} \/ 2 \\times 10^5~{\\rm yr})^{-1}$.\n\nThe positron fraction rises at higher energies than the\nantiproton fraction (Fig.~\\ref{fig:posifra}), because the spectral\nindex of the background antiprotons is harder than that of the\nbackground positrons. This comes from the difference between their\ncooling processes: in the current situation, cooling is effective\nonly for the background positrons and electrons.\n \n\nIn Fig.~\\ref{fig:posipbar}, we plot the positron to antiproton ratio\nas a function of the rigidity. Here the local components represent the\ncontribution of the nearby SNRs, produced only by the $pp$ collisions.\nFrom this figure, we find that both the positron and the\nantiproton data can be consistently fitted only by adding astrophysical\nlocal contributions produced by the same $pp$ collision sources.\n\n\n\\begin{figure}[htbp]\n \\begin{center}\n \\includegraphics[width=100mm]{posipbar20151204.eps} \n \\vspace{-1.1 cm}\n \\caption{Positron to antiproton ratio as a function of the\n rigidity, including the local components produced by the $pp$\n collisions occurring at SNRs near the Earth. The thick solid line\n represents the total flux. From the upper right to\n the lower left, we plot the flux ratios of 1) the one at the\n source (without cooling), 2) only the local components, 3) the\n total of the local and the background components, and 4) only\n the background components. The observational data reported by\n AMS-02 are also plotted. }\n \\end{center}\n \\label{fig:posipbar}\n\\end{figure}\n\n\\section{Conclusion}\nWe have discussed the anomaly of the antiproton fraction\nrecently reported by the AMS-02 experiment. 
By considering a common origin in the $pp$ collisions between cosmic-ray protons accelerated\nby SNRs and a dense cloud surrounding the SNRs, we can fit the\ndata of the observed antiprotons and positrons simultaneously with\nnatural model parameters. The observed fluxes of both antiprotons and\npositrons are consistent with our predictions shown in\nRef.~\\cite{Fujita:2009wk}.\n\nRegardless of the model details, the ratio of antiprotons to positrons\nis essentially determined by the fundamental branching fraction into\neach mode of the $pp$ collisions. Thus the observed antiproton excess\nshould entail the positron excess, and vice versa. This does not\ndepend on the propagation model since both antiparticles propagate in\na similar way below the cooling cutoff energy $\\sim$ TeV.\n\n\nThe cutoff energy of $e^-$ cooling marks the supernova age of\n$\\sim 10^{5}$ years~\\cite{Ioka:2008cv,Kawanaka:2009dk}, while we also\nexpect an $e^{+}$ cutoff. The trans-TeV energy range will be probed by the\nfuture CALET, DAMPE and CTA experiments\n\\cite{Kobayashi:2003kp,Kawanaka:2010uj}. An anisotropy of the arrival\ndirections is also a unique signature, e.g., \\cite{Linden:2013mqa}. We\nmay estimate the amplitude of the anisotropy as\n$\\delta_e \\sim 3d\/(2ct_{\\rm diff}) \\sim 0.5\\%$, which is below the upper\nlimits from Fermi observations~\\cite{Ackermann:2010}.\n\nThe boron to carbon ratio as well as the Li to carbon ratio show no\nclear excesses~\\cite{AMS-02DATA}. This suggests that the carbon\nfraction of the excess-making cosmic rays is smaller than that of the\nordinary cosmic rays. In general, the supernovae in the DC would not\nbe the main channel of cosmic-ray production. Most cosmic rays \nabove $\\sim 30$ GeV may\nbe produced in chemically enriched regions, such as superbubbles, as\nimplied by the hard spectrum of cosmic-ray\nhelium~\\cite{Ohira:2010eq}. 
Alternatively, the carbon abundance of the destroyed\nDC might happen to be lower than the Galactic\naverage~\\cite{Fujita:2009wk}.\n\nWe should be careful about the background systematics. In particular,\nthe propagation uncertainties yield the largest\nerrors~\\cite{Yuan:2014pka,Giesen:2015ufa}. However, in the energy\nregion above $\\sim 100$ GeV, where the background contribution is\nsmall, the observed positron to antiproton ratio is very close to the\nbranching fraction of the $pp$ collisions (source components in\nFig.~\\ref{fig:posipbar}).\n This fact is independent of the background choice and partially supports our model.\n\n\\section*{ACKNOWLEDGMENTS}\nThis work was supported in part by Grants-in-Aid for Scientific\nResearch from the Ministry of Education, Science, Sports, and Culture\n(MEXT), Japan, Nos. 26105520, 15H05889 (K.K.), 26247042 (K.K. and K.I.),\n26287051, 24103006, 24000004 (K.I.), 15K05080 (Y.F.), and 15K05088\n(R.Y.). The work of K.K. and K.I. was also supported by the Center\nfor the Promotion of Integrated Science (CPIS) of Sokendai\n(1HB5804100). 
\n\n\n\\section*{Note added}\nWhile finalizing this manuscript, Ref.~\\cite{Kachelriess:2015oua}\nappeared, which has some overlap with this work.\n\n\n\\section{Introduction}\n\\label{sec:intro}\nAutomatic speech recognition is a fundamental technology used on a daily basis by millions of end-users and businesses.\nApplications include automated phone systems, video captioning and voice assistants, providing an intuitive and seamless interface between users and end systems.\nCurrent ASR approaches rely solely on audio input to produce transcriptions.\nHowever, the wide availability of cameras in smartphones and home devices acts as motivation to build AV-ASR models that rely on and benefit from multimodal input.\n\nTraditional AV-ASR systems focus on tracking the user's facial movements and performing lipreading to augment the auditory inputs \\cite{Potamianos97speakerindependent,mroueh2015deep,Tao2018AligningAF}.\nThe applicability of such models in real-world environments is limited, due to the need for accurate audio-video alignment and careful camera placement.\nInstead, we focus on using video to contextualize the auditory input and perform multimodal grounding. For example, a basketball court is more likely to include the term ``lay-up'' whereas an office is more likely to include the term ``lay-off''.\nThis approach can boost ASR performance, while the requirements for video input are kept relaxed \\cite{how2-baseline, av-grounding-asr}. Additionally, we consider a multiresolution loss that takes into account transcriptions at the character and subword level. 
We show that this scheme regularizes our model, yielding significant improvements over subword-only models. Multitask learning on multiple levels has been previously explored in the literature, mainly in the context of CTC~\\cite{sanabria2018hierarchical,krishna2018hierarchical,ueno2018acoustic}. A mix of seq2seq and CTC approaches combine word and character levels~\\cite{kremer2018inductive,ueno2018acoustic} or utilize explicit phonetic information \\cite{toshniwal2017multitask,sanabria2018hierarchical}.\n\n\n\n\nModern ASR systems rely on end-to-end, alignment-free neural architectures, e.g., CTC \\cite{ctc} or sequence-to-sequence models \\cite{rnnt,deep-cnn}.\nThe use of attention mechanisms significantly improved results in \\cite{attention-asr} and \\cite{las}.\nRecently, the success of transformer architectures for NLP tasks \\cite{transformer,bert,transformer-xl} has motivated speech researchers to investigate their efficacy in end-to-end ASR \\cite{karita2019comparative}.\nZhou et al.\\ apply an end-to-end transformer architecture for Mandarin Chinese ASR \\cite{zhou2018syllable}.\nSpeech-Transformer extends the scaled dot-product attention mechanism to $2$D and achieves competitive results for character-level recognition \\cite{speech-transformer, improving-speech-transformer}.\nPham et al. 
introduce the idea of stochastically deactivating layers during training to achieve a very deep model \\cite{very-deep-transformer}.\nA major challenge of the transformer architecture is the quadratic memory complexity as a function of the input sequence length.\nMost architectures employ consecutive feature stacking \\cite{very-deep-transformer} or CNN preprocessing \\cite{speech-transformer,karita2019comparative} to downsample input feature vectors.\n\\citet{vggtransformer} use a VGG-based input network to downsample the input sequence and to obtain learnable positional embeddings.\n\n\n\\begin{figure*}[htb]\n\t\\centering\n\t\\includegraphics[width=.8\\linewidth]{figs\/schematic_2.pdf}\n\t\\caption{Overall system architecture. A cross-modal scaled dot-product attention layer is used to project the visual data into the audio feature space, followed by an additive fusion.}\n\t\\label{fig:architecture}\n\\end{figure*}\n\nMultimodal grounding for ASR systems has been explored in \\cite{how2-baseline}, where a pretrained RNN-based ASR model is finetuned with visual information through Visual Adaptive Training.\nFurthermore, \\citet{av-grounding-asr} use a weakly supervised semantic alignment criterion to improve ASR results when visual information is present.\nMultimodal extensions of the transformer architecture have also been explored. These extensions mainly fuse visual and language modalities in the fields of Multimodal Translation and Image Captioning.\nMost approaches focus on using the scaled dot-product attention layer for multimodal fusion and cross-modal mapping.\n\\citet{afouras2018deep} present a transformer model for AV-ASR targeted at lip-reading-in-the-wild tasks. It uses a self-attention block to encode the audio and visual modalities independently. A decoder individually attends to the audio and video modalities, producing character transcriptions. 
In comparison, our study uses the video features to provide contextual information to our ASR.\n\\citet{libovicky-etal-2018-input} employ two encoder networks for the textual and visual modalities and propose four methods of using the decoder attention layer for multimodal fusion, with hierarchical fusion yielding the best results.\n\\citet{yu2019multimodal} propose an encoder variant to fuse deep, multi-view image features and use them to produce image captions in the decoder.\n\\citet{le-etal-2019-multimodal} use cascaded multimodal attention layers to fuse visual information and dialog history for a multimodal dialogue system.\n\\citet{cross-modal-transformer} present Multimodal Transformers, relying on a deep pairwise cascade of cross-modal attention mechanisms to map between modalities for multimodal sentiment analysis.\n\nIn relation to the previous studies, the main contributions of this study are a) a fusion mechanism for audio and visual modalities based on cross-modal scaled dot-product attention, b) an end-to-end training procedure for multimodal grounding in ASR, and c) the use of a multiresolution training scheme for character- and subword-level recognition in a seq2seq setting without relying on explicit phonetic information.\nWe evaluate our system on the $300$-hour subset of the How2 database \\cite{sanabria18how2}, achieving relative gains of up to 3.76\\% with the addition of visual information. Furthermore, we show relative gains of 18\\% with the multiresolution loss. Our results are comparable to state-of-the-art ASR performance on this database. \n\n\n\n\\section{Proposed Method}\nOur transformer architecture uses two transformer encoders to individually process acoustic and visual information (Fig. 
\\ref{fig:architecture}).\nAudio frames are fed to the first set of encoder layers.\nWe denote the space of the encoded audio features as the audio space $\\mathbb{A}$.\nSimilarly, video features are projected to the video space $\\mathbb{V}$ using the second encoder network.\nFeatures from the audio and visual spaces are passed through a tied feed-forward layer that projects them into a common space before passing them to their respective encoder layers. This tied embedding layer is important for fusion, as it helps align the semantic audio and video spaces.\nWe then use a cross-modal attention layer that maps projected video representations to the projected audio space (Section \\ref{ssec:cm_attention}).\nThe outputs of this layer are added to the original audio features using a learnable parameter $\\alpha$ to weigh their contributions.\nThe fused features are then fed into the decoder stack, followed by dense layers to generate character and subword outputs. For multiresolution predictions (Section \\ref{ssec:multiresolution}), we use a common decoder for both character- and subword-level predictions, followed by a dense output layer for each prediction. 
This reduces the model parameters and enhances the regularization effect of multitask learning.\n\n\n\\subsection{Cross-modal Attention}\n\\label{ssec:cm_attention}\nScaled dot-product attention operates by constructing three matrices, $K$, $V$ and $Q$, from sequences of inputs.\n$K$ and $V$ may be considered keys and values in a ``soft'' dictionary, while $Q$ is a query that contextualizes the attention weights.\nThe attention mechanism is described in Eq.~\\ref{eq:att}, where $\\sigma$ denotes the softmax operation.\n\\begin{equation}\n Y = \\sigma(KQ^T)V\n\\label{eq:att}\n\\end{equation}\n\nThe case where $K$, $V$ and $Q$ are constructed using the same input sequence constitutes a self-attention mechanism.\nWe are interested in cross-modal attention, where $K$ and $V$ are constructed using inputs from one modality $\\mathbb{M}_1$, video in our case (Fig.~\\ref{fig:architecture}), and $Q$ using another modality $\\mathbb{M}_2$, audio.\nThis configuration is an effective way to map features from $\\mathbb{M}_1$ to $\\mathbb{M}_2$ \\cite{cross-modal-transformer}.\nNote that such a configuration is used in the decoder layer of the original transformer architecture \\cite{transformer}, where targets are attended based on the encoder outputs.\n\\subsection{Multiresolution training}\n\\label{ssec:multiresolution}\nWe propose the use of a multitask training scheme where the model predicts both character- and subword-level transcriptions.\nWe jointly optimize the model using the weighted sum of the character- and subword-level losses, as in Eq.~\\ref{eq:loss}:\n\\begin{equation}\n L = \\gamma \\, L_{\\mathrm{subword}} + (1 - \\gamma) \\, L_{\\mathrm{character}}\n\\label{eq:loss}\n\\end{equation}\n\\noindent where $\\gamma$ is a hyperparameter that controls the importance of each task.\n\nThe intuition for this stems from the observation that character- and subword-level models make different kinds of mistakes.\nFor character prediction, the model tends to predict words that sound phonetically similar to the 
ground truths, but are syntactically disjoint from the rest of the sentence.\nSubword prediction yields more syntactically correct results, but rare words tend to be broken down into more common words that sound similar but are semantically irrelevant.\nFor example, character-level prediction may turn ``\\textit{old-fashioned}'' into ``\\textit{old-fashioning}'', while subword level\nturns the sentence ``\\textit{ukuleles are different}'' into ``\\textit{you go release are different}''.\nWhen combining the losses, subword prediction, which shows superior performance, is kept as the primary output, while the character prediction is used as an auxiliary task for regularization.\n\n\n\n\n\n\\section{Experimental Setup}\nWe conduct our experiments on the How2 instructional videos database \\cite{sanabria18how2}.\nThe dataset consists of $300$ hours of instructional videos from the YouTube platform.\nThese videos depict people showcasing particular skills and have high variation in video\/audio quality, camera angles and duration.\nThe transcriptions are mined from the YouTube subtitles, which contain a mix of automatically generated and human-annotated transcriptions.\nAudio is encoded using $40$ mel-filterbank coefficients and $3$ pitch features with a frame size of $10$ ms, yielding $43$-dimensional feature vectors.\nThe final samples are segments of the original videos, obtained using word-level alignment.\nWe follow the video representation of the original paper \\cite{how2-baseline}, where a $3$D ResNeXt-101 architecture, pretrained on action recognition, is used to extract $2048$D features \\cite{hara2018can}.\nVideo features are average-pooled over the video frames, yielding a single feature vector. 
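To make the fusion of Section \ref{ssec:cm_attention} concrete, here is a minimal NumPy sketch of cross-modal attention with the additive fusion weighted by the scalar $\alpha$. This is our own single-head illustration, not code from this work: the projection matrices, the key dimension, and the stability scaling by $\sqrt{d_k}$ are illustrative assumptions, and the actual model uses multi-head attention inside transformer encoder layers.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_modal_attention(audio, video):
    """K and V are built from the video modality, Q from the audio modality,
    so video features are mapped into the audio feature space."""
    Q, K, V = audio @ W_q, video @ W_k, video @ W_v
    return softmax(Q @ K.T / np.sqrt(Q.shape[-1])) @ V

d, d_k = 480, 64           # transformer dimension from the paper; d_k is our choice
T_a, T_v = 100, 1          # video is average-pooled to a single feature vector
W_q = rng.normal(size=(d, d_k)) / np.sqrt(d)
W_k = rng.normal(size=(d, d_k)) / np.sqrt(d)
W_v = rng.normal(size=(d, d)) / np.sqrt(d)

audio = rng.normal(size=(T_a, d))   # encoded audio features
video = rng.normal(size=(T_v, d))   # encoded video features
alpha = 0.1                         # learnable additive-fusion weight

fused = audio + alpha * cross_modal_attention(audio, video)
print(fused.shape)                  # (100, 480): same shape as the audio stream
```

Because the fusion is additive, setting $\alpha=0$ recovers the audio-only features exactly, which is the gating experiment discussed in the results.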
For our experiments, we use the train, development and test splits proposed by \\cite{sanabria18how2}, which have sizes $298.2$ hours, $3.2$ hours and $3.7$ hours, respectively.\n\nOur model consists of $6$ encoder layers and $4$ decoder layers.\nWe use transformer dimension $480$, intermediate ReLU layer size $1920$ and $0.2$ dropout.\nAll attention layers have $6$ attention heads.\nThe model is trained using the Adam optimizer with learning rate $10^{-3}$ and $8000$ warmup steps.\nWe employ label smoothing of $0.1$.\nWe weight the multitask loss with $\\gamma=0.5$, which gives the best performance.\nA coarse search was performed for tuning all hyperparameters over the development set.\nFor character-level prediction, we extract $41$ graphemes from the transcripts.\nFor subword-level prediction, we train a SentencePiece tokenizer \\cite{kudo2018sentencepiece} over the train set transcriptions using byte-pair encoding and vocabulary size $1200$.\nFor decoding, we use beam search with beam size $5$ and length normalization parameter $0.7$. \nWe train models for up to $200$ epochs, and the model achieving the best loss is selected using early stopping. 
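With these vocabulary sizes, the joint objective of Eq.~\ref{eq:loss} at $\gamma=0.5$ amounts to averaging two cross-entropy terms. The sketch below is our own toy illustration (random logits and targets, sequence lengths chosen arbitrarily), not the training code of this work:

```python
import numpy as np

def cross_entropy(logits, targets):
    # mean negative log-likelihood of integer targets under softmax(logits)
    z = logits - logits.max(axis=-1, keepdims=True)
    log_probs = z - np.log(np.exp(z).sum(axis=-1, keepdims=True))
    return -log_probs[np.arange(len(targets)), targets].mean()

rng = np.random.default_rng(0)
char_logits = rng.normal(size=(120, 41))     # 41 graphemes
sub_logits = rng.normal(size=(40, 1200))     # SentencePiece vocabulary of 1200
char_targets = rng.integers(0, 41, size=120)
sub_targets = rng.integers(0, 1200, size=40)

gamma = 0.5
L = gamma * cross_entropy(sub_logits, sub_targets) \
    + (1 - gamma) * cross_entropy(char_logits, char_targets)
print(f"multiresolution loss = {L:.3f}")
```

Note that the character-level term operates on a longer sequence with a smaller vocabulary, which is exactly the asymmetry that makes the two tasks complementary.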
Any tuning of the original architecture is performed on the development split.\nNo language model or ensemble decoding is used in the output.\n\\section{Results and Discussion}\n\\begin{table}[tb]\n\t\n\t\\begin{center}\n\t\t\\begin{tabular}{|c c c |}\n\t\t\t\\hline\n\t\t\tInput handling & Recognition level & WER \\\\ [0.5ex]\n\t\t\t\\hline\\hline\n\t\t\tFiltering & Character & $33.0$ \\\\\n\t\t\t\\hline\n\t\t\tFiltering & Subword & $29.7$ \\\\\n\t\t\t\\hline\n\t\t\tChunking & Character & $31.3$ \\\\\n\t\t\t\\hline\n\t\t\tChunking & Subword & $29.9$ \\\\\n\t\t\t\\hline\n\t\t\tStacking & Character & $28.3$ \\\\\n\t\t\t\\hline\n\t\t\tStacking & Subword & $26.1$ \\\\\n\t\t\t\\hline\n\t\t\tStacking & MR & $\\mathbf{21.3}$ \\\\\n\t\t\t\\hline\n\t\t\t\n\t\t\\end{tabular}\n\t\\end{center}\n\t\\caption{Results for different methods of input handling for different prediction resolutions. \\emph{MR} stands for multiresolution.}\n\t\\label{tab:subword-character}\n\\end{table}\n\nOne of the challenges of using scaled dot-product attention is the quadratic increase of layerwise memory complexity as a function of the input sequence length.\nThis issue is particularly prevalent in ASR tasks, with large input sequences. We explore three simple approaches to work around this limitation.\nFirst, we filter out long input sequences (longer than $15$~s), leading to a loss of $100$ hours of data.\nSecond, we chunk the input samples into smaller sequences, using forced alignment with a conventional DNN-HMM model to find pauses at which to split the input and the transcriptions.\nFinally, we stack $4$ consecutive input frames into a single feature vector, thus reducing the input length by a factor of $4$. 
Note that this only reshapes the input data, as the dimension of our input is increased by the stacking process\n\\footnote{We tried to use the convolutional architecture from \\cite{vggtransformer}, but it failed to converge in our experiments, possibly due to lack of data}.\nResults for the downsampling techniques for character- and subword-level predictions are summarized in Table~\\ref{tab:subword-character}.\nWe observe that the subword-level model performs better than the character-level one (up to 10\\% relative) in all settings.\nThis can be attributed to the smaller number of decoding steps needed for the subword model, where error accumulation is smaller.\nFurthermore, we see that the naive filtering of long sequences yields underperforming systems due to the large data loss. Additionally, we see that frame stacking has superior performance to chunking.\nThis is not surprising, as splitting the input samples into smaller chunks leads to the loss of contextual information, which is preserved with frame stacking.\nWe evaluate the proposed multiresolution training technique with the frame stacking technique, observing a significant improvement (18.3\\%) in the final WER.\nWe thus observe that predicting finer resolutions as an auxiliary task can be used as an effective means of regularization for this sequence-to-sequence speech recognition task. 
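The frame-stacking step described above amounts to a reshape of the $(T, 43)$ feature matrix into $(T\/4, 172)$. A minimal sketch (our own illustration; the edge-handling convention for trailing frames is an assumption, since the text does not specify it):

```python
import numpy as np

def stack_frames(feats, k=4):
    """Stack k consecutive frames, trading sequence length for feature width.
    Trailing frames that do not fill a complete group are dropped (one
    possible convention; the paper does not specify edge handling)."""
    T, d = feats.shape
    T_out = T // k
    return feats[: T_out * k].reshape(T_out, k * d)

feats = np.random.default_rng(0).normal(size=(1001, 43))  # ~10 s of 10 ms frames
stacked = stack_frames(feats)
print(stacked.shape)   # (250, 172)
```

Since attention memory grows quadratically with sequence length, this single reshape cuts the attention-map footprint by a factor of $16$ without discarding any frame content.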
Furthermore, we have empirically observed that when training in multiple resolutions, models can converge around 50\\% faster than single-resolution models.\n\n\\begin{table}[tb]\n\t\\small\n\t\\begin{center}\n\t\t\\begin{tabular}{|c c c c|}\n\t\t\t\\hline\n\t\t\t& & & $\\Uparrow$ \\\\\n\t\t\tFeatures & Level & WER & over audio \\\\\n\t\t\t[0.5ex]\n\t\t\t\\hline\\hline\n\t\t\tAudio & Subword & $26.1$ & -\\\\\n\t\t\t\\hline\n\t\t\tAudio + ResNeXt & Subword & $25.0$ & $3.45\\%$ \\\\\n\t\t\n\t\t\n\t\t\t\\hline\n\t\t\t\\hline\n\t\t\tAudio & MR & $\\mathbf{21.3}$ & - \\\\\n\t\t\t\\hline\n\t\t\tAudio + ResNeXt & MR & $20.5$ & $3.76\\%$ \\\\\n\t\t\n\t\t\n\t\t\t\\hline\n\t\t\t\\hline\n\t\t\tAudio (B) & Subword & $\\mathbf{19.2}$ & - \\\\\n\t\t\t\\hline\n\t\t\tAudio + ResNeXt (B) & Subword & $\\mathbf{18.4}$ & $3.13\\%$ \\\\\n\t\t\t\\hline\n\t\t\\end{tabular}\n\t\\end{center}\n\t\\caption{Comparison of audio-only ASR models versus AV-ASR models with ResNeXt image features. \\emph{MR} stands for multiresolution. \\emph{(B)} shows the results for the LAS model \\cite{how2-baseline}.}\n\t\\vspace{-0.6cm}\n\t\\label{tab:multimodal}\n\t\n\\end{table}\nNext, we evaluate the relative performance improvement obtained from utilizing the visual features (Table~\\ref{tab:multimodal}).\nWe observe that incorporating visual information improves ASR results. Our AV-ASR system yields gains $>3\\%$ over audio-only models for both subword and multiresolution predictions. \nFinally, we observe that while the Listen, Attend and Spell-based architecture of \\cite{how2-baseline} is slightly stronger than the transformer model, the gains from adding visual information are consistent across models. 
It is important to note that our models are trained end-to-end with both audio and video features.\n\n\n\nAn important question for real-world deployment of multimodal ASR systems is their performance when the visual modality is absent.\nIdeally, a robust system performs satisfactorily when the user's camera is off or in low-light conditions.\nWe evaluate our AV-ASR systems in the absence of visual data with the following experiments: a) replacing visual feature vectors with zeros, b) initializing visual features with Gaussian noise with standard deviation $0.2$, and c) setting the value $\\alpha$ to 0 at inference, gating the visual features completely. Table~\\ref{tab:audio-only} shows the results for the different experiments. Results indicate that gating visual inputs works better than zeroing them out. Adding Gaussian noise performs best, which again indicates the limited availability of data. Overall, in the absence of visual information, without retraining, the AV-ASR model worsens by 6\\% relative compared to audio-only models. \n\n\\begin{table}[tb]\n\n \\begin{center}\n \\begin{tabular}{|c c |}\n \\hline\n Missing input handling & WER \\\\ [0.5ex]\n \\hline\\hline\n Zeros & 23.1 \\\\\n \\hline\n Gaussian Noise $\\sigma$=0.2 & 22.6 \\\\\n \\hline\n Gating visual input $\\alpha$=0 & 22.8 \\\\\n \\hline\n \\end{tabular}\n \\end{center}\n\t\\vspace{-0.3cm}\n \\caption{Experimental evaluation of the AV-ASR model for handling missing visual input. Here $\\sigma$ denotes the standard deviation of the noise.}\n \\label{tab:audio-only}\n\\end{table}\n\n\\section{Conclusions}\nThis paper explores the applicability of the transformer architecture for multimodal grounding in ASR. Our proposed framework uses cross-modal dot-product attention to map visual features to the audio feature space. Audio and visual features are then combined with a scalar additive fusion and used to predict character as well as subword transcriptions. 
We employ a novel multitask loss that combines the subword-level and character-level losses. Results on the How2 database show that a) the multiresolution loss regularizes our model, producing significant gains in WER over character-level and subword-level losses individually, and b) adding visual information results in relative gains of 3.76\\% over the audio-only model's results, validating our approach.\n\nDue to the large memory requirements of the attention mechanism, we apply aggressive preprocessing to shorten the input sequences, which may hurt model performance. In the future, we plan to alleviate this by incorporating ideas from sparse transformer variants \\cite{reformer,sparsetransformer}.\nFurthermore, we will experiment with more elaborate, attention-based fusion mechanisms.\nFinally, we will evaluate the multiresolution loss on larger datasets to analyze its regularizing effects.\n\\bibliographystyle{acl_natbib}\n\n\\section{Introduction and notations}\n\nAn interconnection topology can be represented by a graph $G=(V,E)$, where $V$ denotes the processors and $E$ the communication links.\nThe hypercube $Q_n$ is a popular interconnection network because of its structural properties.\\\\\n\\indent The Fibonacci cube was introduced in \\cite{hsu93} as a new interconnection network.\nThis graph is an isometric subgraph of the hypercube which is inspired by the Fibonacci numbers. It has attractive recurrent structures, such as\nits decomposition into two subgraphs which are also Fibonacci cubes by themselves. Structural properties of these graphs were more extensively \nstudied afterwards. See \\cite{survey} for a survey. \\\\\n\\indent Lucas cubes, introduced in \\cite{mupe2001}, have attracted attention as well because these cubes are \nclosely related to the Fibonacci cubes. They have also been widely studied\n\\cite{dedo2002, ra2013, ca2011, klmope2011, camo2012, km2012}. 
\\\\\n\n\\indent We will next define some concepts needed in this paper. \nLet $G$ be a connected graph. The \\emph{open neighbourhood} of a vertex $u$ is $N_G(u)$, the set of vertices adjacent to $u$. The \\emph{closed neighbourhood} of $u$ is $N_G[u]=N_G(u)\\cup\\{u\\}$. The \\emph{distance} between two vertices, denoted $d_G(x,y)$, is the length of a shortest path between $x$ and $y$. We thus have $N_G[u]=\\{v \\in V(G);d_G(u,v)\\leq1\\}$. We will use the notations $d(x,y)$ and $N[u]$ when the graph is unambiguous.\\\\\nA \\emph{dominating set} $D$ of $G$ is a set of vertices such that every vertex of $G$ belongs to the closed neighbourhood of at least one vertex of $D$. \nIn \\cite{Biggs}, Biggs initiated the study of perfect codes in graphs, a generalization of classical 1-error-correcting perfect codes. A \\emph{code} $C$ in $G$ is a set of vertices such that for every pair of distinct vertices $c,c'$ of $C$ we have $N_G[c]\\cap N_G[c']=\\emptyset$, or equivalently such that $d_G(c,c')\\geq3$.\n\nA \\emph{perfect code} of a graph $G$ is both a dominating set and a code. It is thus a set of vertices $C$ such that every vertex of $G$ belongs to the closed neighbourhood of exactly one vertex of $C$. A perfect code is also known as an efficient dominating set. The existence or non-existence of perfect codes has been considered for many graphs. See the introduction of \\cite{aabfk2016} for some references.\n\nThe vertex set of the \\emph{$n$-cube} $Q_n$ is the set $\\mathbb{B}_n$ of binary strings of length $n$, two vertices being adjacent if they differ in precisely one position. Classical 1-error-correcting codes and perfect codes are codes and perfect codes in the graph $Q_n$. The \\emph{weight} of a binary string is the number of 1's.\nThe concatenation of strings $\\bm{x}$ and $\\bm{y}$ is denoted $\\bm{x}||\\bm{y}$, or just $\\bm{x}\\bm{y}$ when there is no ambiguity. 
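As a small illustration of the preceding definitions (our own example, not taken from the paper): in $Q_3$, the set $\{000, 111\}$ is a perfect code, since every vertex lies at Hamming distance at most $1$ from exactly one codeword, while $\{000, 011\}$ is not even a code because the two codewords are at distance $2<3$. A quick exhaustive check:

```python
from itertools import product

def hamming(u, v):
    return sum(a != b for a, b in zip(u, v))

def is_perfect_code(C, n):
    """True iff every vertex of Q_n belongs to the closed neighbourhood
    of exactly one codeword of C."""
    vertices = ("".join(bits) for bits in product("01", repeat=n))
    return all(sum(hamming(v, c) <= 1 for c in C) == 1 for v in vertices)

print(is_perfect_code(["000", "111"], 3))   # True: a perfect code in Q_3
print(is_perfect_code(["000", "011"], 3))   # False: codewords at distance 2 < 3
```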
A string $\\bm{f}$ is a \\emph{substring} of a string $\\bm{s}$ if there exist strings $\\bm{x}$ and $\\bm{y}$, possibly empty, such that $\\bm{s}=\\bm{x}\\bm{f}\\bm{y}$.\n\nA {\\em Fibonacci string} of length $n$ is a binary string $\\bm{b}=b_1\\ldots b_n$ with $b_i\\cdot b_{i+1}=0$ for $1\\leq i<n$.\n\n\\begin{align}\n\t\\ti{\\psi}_{\\hat{l} mkn}(r>r_\\text{max}) &= {C}^+_{\\hat{l} mkn}\\ti{X}^+_{\\hat{l} mkn}(r).\n\\end{align}\nThe calculation of ${C}^\\pm_{\\hat{l} mkn}$ is described in Appendix \\ref{app:normC} and \\cite{NasiOsbuEvan19}. As\nnoted previously \\cite{DrasFlanHugh05,FlanHughRuan14}, varying the initial conditions of the source changes\n${C}^\\pm_{\\hat{l} mkn}$ by a phase factor \\cite{DrasFlanHugh05}\n\\begin{equation}\n\\label{eqn:fidToGenClmkn}\n\tC^\\pm_{\\hat{l} mkn}(q_{(\\a)0}) =\n\te^{i\\xi_{mkn}(q_{(\\a)0})}\\hat{C}^\\pm_{\\hat{l} mkn},\n\\end{equation}\nwhere the set of initial conditions $q_{(\\a)0}$ is defined in \\eqref{eqn:initialPhasesTuple}. The phase offset takes the form\n\\begin{multline}\n\t\\xi_{mkn}(q_{(\\a)0})\\equiv m(\\D \\hat{\\vp}(q_{r0},q_{\\th0})-\\vp_0)\n\t\\\\\n\t-\\o_{mkn}(\\D \\hat{t}(q_{r0},q_{\\th0})-t_0)\n\t\t\\\\\n\t\t- k q_{\\th0} - n q_{r0} ,\n\\end{multline}\nand the hatted normalization constants $\\hat{C}^\\pm_{\\hat{l} mkn}$ are calculated assuming the fiducial orbit.\nWe provide a derivation of this relationship in Appendix \\ref{app:normC}. 
Equation \\eqref{eqn:fidToGenClmkn}\nalso holds true for the Teukolsky amplitudes calculated for gravitational perturbations \\cite{FlanHughRuan14}.\n\nWith these normalized homogeneous solutions, we define the following extended homogeneous functions\n\\begin{align}\n\\label{eqn:EHS}\n\t&{\\phi}^\\pm_{lm}(t,r)\\equiv \\sum_{\\hat{l}=|m|}^{+\\infty}\n\t\\sum_{k=-\\infty}^{+\\infty}\\sum_{n=-\\infty}^{+\\infty}\n\t{\\phi}^\\pm_{l\\hat{l} mkn}(r)e^{-i\\o_{mkn}t},\n\t\\\\ \\label{eqn:EHSmodes}\n\t&{\\phi}^\\pm_{l\\hat{l} mkn}(r)\\equiv \\frac{1}{\\varpi}\n\tb^l_{\\hat{l} mkn}{C}^\\pm_{\\hat{l} mkn}\n\t\\ti{X}^\\pm_{\\hat{l} mkn}(r) ,\n\\end{align}\nfor each spherical harmonic $Y_{lm}(\\th,\\vp)$. We refer to ${\\phi}^\\pm_{lm}(t,r)$ as extended homogeneous\nfunctions, not extended homogeneous solutions, since by construction there are no time domain wave equations that\nthey satisfy (the Teukolsky equation does not separate with ordinary spherical harmonics). These functions do\nhowever have the advantage that the sums in \\eqref{eqn:EHS}\nconverge exponentially, unlike those in \\eqref{eqn:fieldDecomposition}. 
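The phase rotation of Eq.~\eqref{eqn:fidToGenClmkn} is cheap to apply in post-processing once the fiducial amplitudes are stored. The following schematic sketch is our own illustration (the function names and the inputs standing in for $\D\hat{t}(q_{r0},q_{\th0})$ and $\D\hat{\vp}(q_{r0},q_{\th0})$ are placeholders for quantities computed from the fiducial geodesic):

```python
import numpy as np

def xi_mkn(m, k, n, omega_mkn, q_r0, q_th0, t0, phi0, dt_hat, dphi_hat):
    """Phase offset xi_mkn relating the fiducial amplitudes hat{C} to the
    amplitudes for arbitrary initial conditions (q_r0, q_th0, t0, phi0).
    dt_hat and dphi_hat stand in for Delta-hat-t(q_r0, q_th0) and
    Delta-hat-phi(q_r0, q_th0), which come from the fiducial geodesic."""
    return (m * (dphi_hat - phi0)
            - omega_mkn * (dt_hat - t0)
            - k * q_th0 - n * q_r0)

def rotate_amplitude(C_hat, xi):
    # C_{lmkn} = e^{i xi_mkn} * hat{C}_{lmkn}: a pure phase rotation
    return np.exp(1j * xi) * C_hat

# Fiducial initial phases (all offsets zero) leave the amplitude unchanged:
xi0 = xi_mkn(2, 1, 3, 0.1, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0)
print(rotate_amplitude(1.0 + 0.5j, xi0))   # (1+0.5j)
```

Since the rotation is a pure phase, the moduli $|C^\pm_{\hat{l} mkn}|$, and hence the mode fluxes, are unchanged by the choice of initial conditions.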
Furthermore, even though the individual\n$l\\hat{l} mkn$ modes of \\eqref{eqn:EHSmodes} are not valid solutions of the inhomogeneous Teukolsky\nequation in the source region $r_\\text{min}\\leq r \\leq r_\\text{max}$, once the full field is reconstructed in the\ntime domain by summing over all modes, we are left with extended homogeneous solutions that provide an accurate and\nconvergent representation of the retarded field up to the location of the point charge\n\\begin{align}\n\\label{eqn:phiRet}\n\t{\\Phi}^\\text{ret}(t,r,\\th,\\vp)&=\n\t{\\Phi}^{-}(t,r,\\th,\\vp)\\Theta\\left(r_p(t)-r\\right)\n \\\\\n &\\qquad + \\hat{\\Phi}^{+}(t,r,\\th,\\vp)\\Theta\\left(r-r_p(t)\\right), \\notag\n \\\\ \\label{eqn:phiPM}\n {\\Phi}^{\\pm}(t,r,\\th,\\vp) &=\n q \\sum_{lm} {\\phi}^\\pm_{lm}(t,r) Y_{lm}(\\th,\\vp).\n\\end{align}\n\n\\subsection{Retarded and singular contributions to the SSF}\n\\label{sec:ssfReg}\n\nUsing \\eqref{eqn:phiRet} and \\eqref{eqn:phiPM}, we construct the $l$ modes of the force ${F}^{\\text{ret},l}_{\\a\\pm}$,\nalong the particle worldline\n\\begin{align}\n\\label{eqn:ssfRetMino}\n\t{F}^{\\text{ret},l}_{\\a\\pm}(\\la) &=\n\t\\sum_{m=-l}^l\n\t(\\mathcal{D}^{lm}_\\a {\\phi}_{lm}^{\\pm})({t}_p,{r}_p)\n\tY_{lm}({\\th}_p,{\\vp}_p) ,\n\\end{align}\nwhere the coordinate positions of the particle are understood to be functions of Mino time [e.g., ${t}_p={t}_p(\\la)$],\nand the operator $\\mathcal{D}^{lm}_\\a$ performs the following operations on the extended homogeneous functions:\n\\begin{align}\n\t\\mathcal{D}_t^{lm}\\phi^\\pm_{lm}&\\equiv\\partial_t\\phi^\\pm_{lm},\n\t\\\\\n\t\\mathcal{D}_r^{lm}\\phi^\\pm_{lm}&\\equiv\\partial_r\\phi^\\pm_{lm},\n\t\\\\ \\label{eqn:delTheta}\n\t\\mathcal{D}_\\th^{lm}\\phi^\\pm_{lm}&\\equiv\\b^{(-3)}_{l+3,m}\\phi^\\pm_{l+3,m}\n\t+\\b^{(-1)}_{l+1,m}\\phi^\\pm_{l+1,m}\n\t\\\\\n\t&\\qquad\n\t+\\b^{(+1)}_{l-1,m}\\phi^\\pm_{l-1,m}\n\t+\\b^{(+3)}_{l-3,m}\\phi^\\pm_{l-3,m}, 
\\notag\n\t\\\\\n\t\\mathcal{D}_\\vp^{lm}\\phi^\\pm_{lm}&\\equiv im\\phi^\\pm_{lm} .\n\\end{align}\nThe coefficients $\\b_{lm}^{(\\pm i)}$ are defined in Appendix A of \\cite{NasiOsbuEvan19}\\footnote{A minor error exists in\nEq. (A4) of [39]. The coefficients $\\beta^{(\\pm 3)}_{lm}$ are missing a minus sign in front of the\nparentheses on the righthand side of Eq. (A4).} and are\nobtained by first applying the window function proposed by Warburton \\cite{Warb15} and then reprojecting the\nderivatives $\\partial_\\th Y_{lm}$ onto the $Y_{lm}$ basis. Details of this operation can be found in\n\\cite{Warb15}, with a correction added in \\cite{NasiOsbuEvan19}.\n\nThe singular contribution is obtained through a local analytic expansion in the neighborhood of the source worldline\n\\cite{BaraOri00,BaraOri03a,DetwMessWhit03}\n\\begin{equation}\n\\label{eqn:regParameters}\n\tF^{\\text{S},l}_{\\a\\pm} = A^\\pm_{\\a}L + B_\\a\n\t+\\sum_{n=1}^{+\\infty} \\frac{D_{\\a,2n}}{\\prod_{k=1}^n (2L-2k)(2L+2k)},\n\\end{equation}\nwhere $L \\equiv l+1\/2$, and the regularization parameters $A_{\\a\\pm}$, $B_\\a$, and\n$D_{\\a,2n}$ are independent of $l$ but functions of $r_p$, $\\th_p$, $u^r$, $u^\\th$, $\\mathcal{E}$, $\\mathcal{L}_z$,\nand $\\mathcal{Q}$ (as well as $a$ and $M$). Only $A_{\\a\\pm}$ and $B_\\a$ are known analytically for generic bound\norbits in Kerr spacetime \\cite{BaraOri03a}, while $D_{\\a,2}$ is known analytically for equatorial orbits in Kerr\n\\cite{HeffOtteWard14}.\n\nThe terms with higher-order parameters $D_{\\a,2n}$ have the useful property that their $l$-dependent weights vanish\nupon summing over all $l$,\n\\begin{equation}\n\t\\sum_{l=0}^{+\\infty} \\left[\\prod_{k=1}^n (2L-2k)(2L+2k)\\right]^{-1} = 0 .\n\\end{equation}\nOnly $A_{\\a\\pm}$ and $B_\\a$ are needed for convergent results, but if we neglect the $D_{\\a,2n}$ terms upon combining\nEqs.~\\eqref{eqn:modeSumSSF} and \\eqref{eqn:regParameters}, $F_\\a$ converges at a rate $\\sim l^{-2}$. 
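This vanishing of the summed weights is easy to verify numerically. The sketch below (illustrative Python, not the \\textsc{Mathematica} code used for our results; all names are ours) computes partial sums of the $l$-dependent weights:

```python
def d_weight(l, n):
    """l-dependent weight multiplying D_{alpha,2n}:
    1 / prod_{k=1}^{n} (2L - 2k)(2L + 2k), with L = l + 1/2."""
    L = l + 0.5
    prod = 1.0
    for k in range(1, n + 1):
        prod *= (2.0 * L - 2.0 * k) * (2.0 * L + 2.0 * k)
    return 1.0 / prod

# Partial sums over l tend to zero, so adding D_{alpha,2n} terms leaves the
# total mode sum unchanged while accelerating its convergence.
partial_n1 = sum(d_weight(l, 1) for l in range(100000))  # close to zero
partial_n2 = sum(d_weight(l, 2) for l in range(1000))
```

For $n=1$ the weights telescope, so the partial sum through $l_\text{max}$ modes falls off like $l_\text{max}^{-1}$; higher-$n$ partial sums vanish faster still.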
Each\n$D_{\\a,2n}$ term reintroduced to the regularization procedure improves the convergence rate by another factor of\n$l^{-2}$. Since we truncate the sum over $l$ modes around $l_\\text{max}\\sim 20$, we must numerically fit for the\nhigher-order regularization parameters to improve the convergence of the mode-sum regularization. Our fitting\nprocedure is described in \\cite{NasiOsbuEvan19}. The uncertainties associated with this fitting procedure often\ndominate other numerical errors in the calculation. We use this to obtain uncertainty estimates for our SSF\nresults.\n\n\\section{Constructing the SSF for resonant and nonresonant sources}\n\\label{sec:constructSSF}\n\n\\subsection{Nonresonant sources}\n\\label{sec:genSSF}\n\nWith a generic nonresonant orbit, the SSF is multiply periodic and never repeats over the entire interval\n$-\\infty<\\la<\\infty$. Rather than sampling the SSF over this infinite domain in $\\la$, we map the SSF\nto the angle variables introduced in Sec.~\\ref{sec:angleVar},\n\\begin{equation} \\label{eqn:angleVarSSF}\n\t\\hat{F}^{\\text{ret},l}_{\\a\\pm}(q_r,q_\\th) =\n\t\\sum_{m=-l}^l (\\mathcal{D}^{lm}_\\a \\hat{\\phi}_{lm}^{\\pm})(q_r,q_\\th)\n\t\\hat{Y}_{lm}(q_r,q_\\th) .\n\\end{equation}\nNote that we have placed hats on all functions that are evaluated using the fiducial geodesic solutions of\n\\eqref{eqn:geoTFid}-\\eqref{eqn:geoPhiFid}. 
The evolution of $\\vp$ is dependent on the motion of both $r$ and\n$\\theta$, so in a slight abuse of notation we reparametrize functions to have the following meaning\n\\begin{align}\n\\label{eqn:angleVarYlm}\n\t\\hat{Y}_{lm}(q_r,q_\\th)\\equiv Y_{lm}(\\hat{\\th}_p(q_\\th),\n\t\\D\\hat{\\vp}(q_r,q_\\th)) ,\n\\end{align}\nand\n\\begin{align}\n\t&\\hat{\\phi}^\\pm_{lm}(q_r,q_\\th)\\equiv \\sum_{\\hat{l} kn}\n\t{\\hat{\\phi}^\\pm_{l\\hat{l} mkn}(q_r)}\n\t e^{-i(\\o_{mkn}\\D\\hat{t}(q_r,q_\\th)+kq_\\th+nq_r)}, \\notag\n\t\\\\ \\label{eqn:angleVarEHS}\n\t&\\hat{\\phi}^\\pm_{l\\hat{l} mkn}(q_r)\\equiv \\hat{\\varpi}^{-1}_p(q_r)b^l_{\\hat{l} mkn}\\hat{C}^\\pm_{\\hat{l} mkn}\n\t\\ti{X}^\\pm_{\\hat{l} mkn}(\\hat{r}_p(q_r)) .\n\\end{align}\nHere $\\hat{\\varpi}_p(q_r)=(\\hat{r}_p^2(q_r)+a^2)^{1\/2}$ and the operator $\\mathcal{D}^{lm}_{\\a}$ performs the\nsame function as before. Because the regularization parameters only vary with respect to $r_p$, $\\th_p$, $u^r$,\nand $u^\\th$ (assuming the orbital constants are fixed), the singular field can also be translated into this angle\nvariable parametrization, ultimately providing a description of the SSF in terms of $q_r$ and $q_\\th$\n\\begin{equation}\n\\label{eqn:ssfGenAngleVar}\n\t\\hat{F}_\\a(q_r,q_\\th) = \\sum_{l=0}^\\infty \\left(\n\t\\hat{F}_{\\a\\pm}^{\\text{ret},l}(q_r,q_\\th)\n\t-\\hat{F}^{\\text{S},l}_{\\a\\pm}(q_r,q_\\th)\n\t\\right) .\n\\end{equation}\n\nThe angle variable parametrization maps the entire self-force history onto the finite domain of the invariant\ntwo-torus visualized in Fig.~\\ref{fig:twoTorus}. 
The SSF, projected onto this torus, can then also be represented\nby the (double) Fourier series\n\\begin{align}\n\\label{eqn:ssfFourier}\n\tF_{\\a}&(q_r,q_\\th)= \\sum_{k=-\\infty}^{+\\infty}\\sum_{n=-\\infty}^{+\\infty}\n\tg^{kn}_\\a e^{-i(kq_\\th+nq_r)},\n\t\\\\\n\tg^{kn}_\\a &= \\frac{1}{4\\pi^2}\\int_0^{2\\pi} dq_r\\int_0^{2\\pi} dq_\\th\\;\n\tF_\\a\\left(q_{r}, q_{\\th}\\right)e^{i(kq_{\\th}+nq_{r})}. \\notag\n\\end{align}\nBy densely sampling values of $q_r$ and $q_\\th$ over the torus at evenly spaced points $q_{r,i}=2\\pi i\/N_r$ and\n$q_{\\th,j}=2\\pi j\/N_\\th$ (where $N_r, N_\\th \\in \\mathbb{Z}$), we can construct a discrete Fourier representation of\nthe SSF\n\\begin{align}\n\\label{eqn:ssfDiscreteFourier}\n\tF_{\\a}&(q_r,q_\\th)\\simeq \\sum_{k=0}^{N_\\th-1}\\sum_{n=0}^{N_r-1}\n\tf^{kn}_\\a e^{-i(kq_\\th+nq_r)},\n\t\\\\\n\tf^{kn}_\\a &= \\frac{1}{N_r N_\\th}\\sum_{i=0}^{N_r-1}\\sum_{j=0}^{N_\\th-1}\n\tF_\\a\\left(q_{r,i}, q_{\\th,j}\\right)e^{i(kq_{\\th,j}+nq_{r,i})} . \\notag\n\\end{align}\nGiven $N_r$ and $N_\\th$ large enough that $\\text{max}|f^{kn}_\\a-g^{kn}_\\a|<\\eps_\\text{FS}$, where\n$\\eps_\\text{FS}$ is some predefined accuracy goal, the discrete representation will provide an accurate\napproximation of Eq.~\\eqref{eqn:ssfFourier} \\cite{HoppETC15,NasiOsbuEvan19}. We found that sample numbers of\n$N_r=N_\\th=2^8$ were typically sufficient for constructing a discrete representation that was accurate to about\n$\\eps_\\text{FS}\\sim 10^{-8}-10^{-10}$.
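This sampling-and-transform step maps directly onto a 2D FFT. The following standalone sketch (illustrative Python; a band-limited stand-in function replaces real SSF samples, and all names are ours) builds the discrete coefficients and evaluates the resulting interpolant off the sample grid:

```python
import numpy as np

N_r, N_th = 32, 32  # sample numbers (the text uses 2^8; fewer suffice here)

def F(qr, qth):
    """Stand-in for a smooth, doubly periodic SSF component on the torus."""
    return np.cos(qr) + 0.5 * np.sin(2.0 * qth) + 0.1 * np.cos(qr + qth)

qr = 2.0 * np.pi * np.arange(N_r) / N_r
qth = 2.0 * np.pi * np.arange(N_th) / N_th
QR, QTH = np.meshgrid(qr, qth, indexing="ij")

# f^{kn} = (1/(N_r N_th)) sum_{i,j} F(q_{r,i}, q_{th,j}) e^{+i(k q_{th,j} + n q_{r,i})}
# is exactly a normalized inverse 2D FFT of the sample grid (axis 0 <-> n, axis 1 <-> k).
fkn = np.fft.ifft2(F(QR, QTH))

# Evaluate F ~ sum_{k,n} f^{kn} e^{-i(k q_th + n q_r)} at an off-grid point,
# using centered mode numbers so the interpolant is real and accurate.
n_modes = np.fft.fftfreq(N_r, 1.0 / N_r)
k_modes = np.fft.fftfreq(N_th, 1.0 / N_th)

def F_interp(qr_val, qth_val):
    phases = np.exp(-1j * (n_modes[:, None] * qr_val + k_modes[None, :] * qth_val))
    return float(np.sum(fkn * phases).real)
```

Because the stand-in is band-limited, the reconstruction is exact to machine precision; for actual SSF data the accuracy is instead governed by the truncation tolerance $\\eps_\\text{FS}$.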
The discrete Fourier series provides an efficient method for storing and\ninterpolating SSF data.\n\nWe can easily generalize our results to geodesics with arbitrary initial conditions by applying the following\nshifting relation,\n\\begin{equation}\n\\label{eqn:ssfFid2Gen}\n\tF_\\a(q_r,q_\\th;q_{a0})\n\t= \\hat{F}_\\a(q_r+q_{r 0},q_\\th+q_{\\th 0}).\n\\end{equation}\nA proof of Eq.~\\eqref{eqn:ssfFid2Gen}, which applies for both the SSF and GSF, is provided in Appendix\n\\ref{app:ssfInvariant}. While this result seems almost trivial for the nonresonant case, it surprisingly plays a\nrole in improving the efficiency of SSF calculations for resonant orbits as well, as discussed in\nSec.~\\ref{sec:resSSF}.\n\n\n\\subsection{Resonant sources}\n\\label{sec:resSSF}\n\nThe SSF experienced by a charge following an $r\\th$-resonant geodesic requires a different treatment. The worldline\nof the charge is described by \\eqref{eqn:geoTRes}-\\eqref{eqn:geoPhiRes}. In contrast to the SSF for a nonresonant\norbit [e.g., $\\hat{F}_\\a(q_r,q_\\th)$], we construct the resonant-SSF $\\bar{F}_\\a$ to be a function of the single\nresonant angle variable $\\bar{q}$ and the initial resonant phase $\\bar{q}_0$ [defined in Eq.~\\eqref{eqn:resAngleVar}].\nWe describe here two methods of calculating $\\bar{F}^\\mathrm{res}_\\a(\\bar{q},\\bar{q}_0)$: the first uses the reduced\nmode spectrum $\\o_{mN}$ defined in \\eqref{eqn:resFreq} to construct the SSF on an $l\\hat{l} mN$ basis, while the second\nuses the generic mode spectrum $\\o_{mkn}$ to construct the SSF on the $l\\hat{l} mkn$ basis, just as we outlined in the\nprevious section for nonresonant orbits. 
These two approaches are similar to the two approaches for calculating\ngravitational wave fluxes discussed in \\cite{FlanHughRuan14}.\n\n\\subsubsection{Constructing the resonant SSF on an $l\hat{l} m N$ basis}\n\nThe retarded SSF sourced by an $r\th$ resonant geodesic, when parametrized in terms of the resonant angle variable\nand resonant phase, takes the form\n\\begin{equation}\n\\label{eqn:fRetRes}\n\t\\bar{F}^{\\text{ret},l}_{\\a\\pm}(\\bar{q};\\bar{q}_0) =\n\t\\sum_{m=-l}^l\n\t(\\mathcal{D}^{lm}_\\a \\bar{\\phi}_{lm}^{\\pm})(\\bar{q};\\bar{q}_0)\\bar{Y}_{lm}(\\bar{q};\\bar{q}_0),\n\\end{equation}\nwhere, in contrast to Eqs.~\\eqref{eqn:angleVarYlm} and \\eqref{eqn:angleVarEHS},\n\\begin{multline}\n\t\\bar{Y}_{lm}(\\bar{q};\\bar{q}_0)\\equiv\n\tY_{lm}(\\bar{\\th}_p(\\bar{q};\\bar{q}_0),\\D\\bar{\\vp}(\\bar{q};\\bar{q}_0)-\\D\\bar{\\vp}(0;\\bar{q}_0)),\n\\end{multline}\nand\n\\begin{align}\n\\label{eqn:resEHS}\n\t&\\bar{\\phi}^\\pm_{lm}(\\bar{q};\\bar{q}_0)\\equiv \\sum_{\\hat{l}=|m|}^{+\\infty} \\sum_{N=-\\infty}^{+\\infty}\n\t{\\bar{\\phi}^\\pm_{l\\hat{l} mN}(\\bar{q};\\bar{q}_0)}\n\t\\\\\n\t&\\qquad \\qquad \\qquad\n\t \\times e^{-i\\left[\\o_{mN}\\left(\\D\\bar{t}(\\bar{q};\\bar{q}_0)-\\D\\bar{t}(0;\\bar{q}_0)\\right)+N\\bar{q}\\right]}, \\notag\n\t\\\\\n\t&\\bar{\\phi}^\\pm_{l\\hat{l} mN}(\\bar{q};\\bar{q}_0)\\equiv\n\t\\bar{\\varpi}^{-1}_p(\\bar{q})b^l_{\\hat{l} mN}\\bar{C}^\\pm_{\\hat{l} mN}(\\bar{q}_0)\n\t\\ti{X}^\\pm_{\\hat{l} mN}(\\bar{r}_p(\\bar{q})). \\notag\n\\end{align}\nAll functions and coefficients with an overbar are evaluated using the resonant geodesic solutions described by\n\\eqref{eqn:geoTRes}-\\eqref{eqn:geoPhiRes}. The $\\bar{C}^\\pm_{\\hat{l} mN}$ are defined in Appendix \\ref{app:normC} and\nvary with the resonant phase parameter $\\bar{q}_0$. Unlike $C^\\pm_{\\hat{l} mkn}(\\bar{q}_0)$,\n$\\bar{C}^\\pm_{\\hat{l} mN}(\\bar{q}_0)$ is not related to the fiducial case $\\bar{C}^\\pm_{\\hat{l} mN}(0)$ by a simple phase\nfactor.
Each time we calculate the SSF for a new value of $\\bar{q}_0$, the source term must be integrated over a\nnew resonant orbit. Since source integration is a computationally intensive aspect of the SSF calculation,\nrepeating this operation for every new resonant phase is costly. Thus, the advantage of reduced dimensionality in\nthe mode spectrum must be weighed against the disadvantage of repeated source integration.\n\n\\subsubsection{Constructing the resonant SSF on an $l\hat{l} m k n$ basis}\n\nAlternatively, we first construct the fiducial SSF $\\hat{F}_\\a(q_r,q_\\th)$ using the methods outlined in\nSec.~\\ref{sec:genSSF}. Combining \\eqref{eqn:ssfGenAngleVar} and \\eqref{eqn:ssfFid2Gen}, we can then relate the\nresonant SSF $\\bar{F}^\\mathrm{res}_\\a(\\bar{q};\\bar{q}_0)$ to the fiducial result by fixing the relationship between\n$q_r$ and $q_\\theta$\n\\begin{equation} \\label{eqn:ssfqq0}\n\t\\bar{F}^\\mathrm{res}_{\\a}\n\t(\\bar{q};\\bar{q}_0) =\n\t\\hat{F}_{\\a}\n\t(\\b_r \\bar{q}, \\b_\\th \\bar{q} + \\b_\\th \\bar{q}_0) .\n\\end{equation}\nIn this way, we simply construct the fiducial SSF $\\hat{F}_{\\a}(q_r,q_\\th)$ on an $l\hat{l} m k n$ basis by relating\nthe $\\hat{l} mN$-mode functions and constants to their $\\hat{l} mkn$-mode counterparts\n\\begin{gather}\n\\label{eqn:freqMKNfreqMN}\n\\o_{mN}=\\o_{m(kn)_N} ,\n\\\\\n\\ti{X}_{\\hat{l} mN}=\\ti{X}_{\\hat{l} m(kn)_N}, \\qquad b^l_{\\hat{l} mN}=b^l_{\\hat{l} m(kn)_N} ,\n\\end{gather}\nwhere $(k,n)_N$ denotes the set of all $(k,n)$ pairs that yield the same harmonic $N$, i.e., those satisfying\n$N=k\\b_\\th+n\\b_r$.
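The $(k,n)_N$ bookkeeping is simple to sketch (illustrative Python; the resonance ratio and truncations are arbitrary choices of ours):

```python
from collections import defaultdict

beta_r, beta_th = 1, 2          # e.g. a 1:2 r-theta resonance
k_max = n_max = 3               # truncation of the toy mode sums

# Group (k, n) pairs by N = k*beta_th + n*beta_r. Every pair within a group
# shares the frequency omega_{mN}, so the homogeneous radial solution for that
# frequency (at fixed spheroidal mode numbers) is computed once per group.
modes_by_N = defaultdict(list)
for k in range(-k_max, k_max + 1):
    for n in range(-n_max, n_max + 1):
        modes_by_N[k * beta_th + n * beta_r].append((k, n))
```

For this 1:2 resonance, $(k,n)=(-1,2)$, $(0,0)$, and $(1,-2)$ all fall in the $N=0$ group and therefore share a single frequency.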
Significant computational time is saved by recycling values\nof the homogeneous radial functions for different values of $k$ and $n$, provided\nthey share the same frequency and spheroidal mode numbers $(\hat{l}, m)$.\n\nThe normalization coefficients are related by a coherent sum over all $k$ and $n$ modes that share the same frequency\n(given by $N$)\n\\begin{align}\n\\label{eqn:ClmN2Clmkn}\n\t\\bar{C}^\\pm_{\\hat{l} mN}(\\bar{q}_0)&= \\sum_{(k,n)_N}\n\te^{i\\ti{\\xi}_{mkn}(\\bar{q}_0)}\\hat{C}^\\pm_{\\hat{l} mkn} ,\n\\end{align}\nas demonstrated in Appendix A of \\cite{FlanHughRuan14} and Sec.~III D of \\cite{GrosLeviPere13}. In this way,\neach $\\bar{C}^\\pm_{\\hat{l} mN}(\\bar{q}_0)$ is a superposition of many amplitudes $\\hat{C}^\\pm_{\\hat{l} mkn}$ that\nwould have been regarded as independent in the nonresonant case. When such a coherent sum is squared, as occurs in\ncomputing fluxes, the superposition produces constructive or destructive interference terms. Note that\n$\\ti{\\xi}_{mkn}(\\bar{q}_0) \\equiv{\\xi}_{mkn}(0,0,\\b_\\th q_{\\th 0},0)$. Substituting\nEqs.~\\eqref{eqn:freqMKNfreqMN}-\\eqref{eqn:ClmN2Clmkn} into Eqs.~\\eqref{eqn:fRetRes}-\\eqref{eqn:resEHS} brings them\ninto the same form as Eqs.~\\eqref{eqn:angleVarSSF}-\\eqref{eqn:angleVarEHS}. Unlike $\\ti{X}_{\\hat{l} mkn}$, each\n$C_{\\hat{l} mkn}$ must be calculated independently, even if they share the same frequencies and spheroidal harmonic\nmode numbers. Essentially, by introducing the more generic mode spectrum $\\o_{mkn}$, we circumvent the need to\nrepeatedly evaluate each $lmN$ mode at different initial phases, but at the expense of summing over an additional\nmode number.
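The interference encoded in Eq.~\\eqref{eqn:ClmN2Clmkn} can be made concrete with toy amplitudes (illustrative Python; every number below is invented and corresponds to no real orbit):

```python
import numpy as np

# Toy normalization amplitudes C_{lmkn} for three (k, n) pairs sharing one N,
# and toy phases xi_{mkn}(qbar_0); none of these values come from a real orbit.
C_hat = np.array([0.8 + 0.2j, -0.3 + 0.5j, 0.1 - 0.4j])
xi = np.array([0.0, 1.1, -2.4])

C_bar = np.sum(np.exp(1j * xi) * C_hat)      # coherent superposition

# In a squared quantity (as enters a flux), the coherent sum differs from the
# incoherent sum of squares by phase-dependent cross (interference) terms.
incoherent = np.sum(np.abs(C_hat) ** 2)
cross_terms = np.abs(C_bar) ** 2 - incoherent
```

Changing $\\bar{q}_0$ changes the phases, and with them the sign and size of the cross terms; this is how resonances make quadratic quantities such as fluxes sensitive to the initial phase.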
The advantage of this approach is that, once a code has already been built to calculate the fiducial\nSSF for nonresonant orbits, it can be easily modified to produce the SSF for resonant sources and avoids the need\nto construct an entirely separate code.\n\n\\subsubsection{Discrete Fourier representation of the resonant SSF}\n\nThe resonant SSF is periodic with respect to $\\bar{q}$ and $\\bar{q}_0$, and therefore can be expressed as a multiple\nFourier series. By sampling the resonant SSF on an evenly spaced two-dimensional grid in $\\bar{q}$ and $\\bar{q}_0$,\nthe discrete Fourier representation of $\\bar{F}^\\mathrm{res}_\\a$ is\n\\begin{align}\n\\label{eqn:FresFourier}\n\t&\\bar{F}^\\text{res}_\\a(\\bar{q};\\bar{q}_0) \\simeq \\sum_{K=0}^{N_0-1}\\sum_{N=0}^{N_\\text{res}-1}\n\t\\bar{g}_\\a^{KN} e^{-iN\\bar{q}}e^{-iK \\bar{q}_0},\n\t\\\\\n\t&\\bar{g}_\\a^{KN} = \\frac{1}{N_0 N_\\text{res}}\n\t\\sum_{\\jmath=0}^{N_0-1}\\sum_{\\imath=0}^{N_\\text{res}-1}\n\t\\bar{F}^\\text{res}_\\a(\\bar{q}_\\imath;\\bar{q}_{0\\jmath}) e^{iN\\bar{q}_\\imath}e^{iK \\bar{q}_{0\\jmath}} ,\n\\end{align}\nwhere $\\bar{q}_{i}=2\\pi i\/N_\\mathrm{res}$ and $\\bar{q}_{0,j}=2\\pi j\/N_0$, with $N_\\mathrm{res}, N_0 \\in \\mathbb{Z}$.\nBy comparing \\eqref{eqn:FresFourier} with \\eqref{eqn:ssfDiscreteFourier} and \\eqref{eqn:ssfqq0}, we can relate\n$\\hat{f}_\\a^{kn}$ and $\\bar{g}_\\a^{KN}$ by\n\\begin{equation}\n\t\\bar{g}_\\a^{KN} = \\hat{f}_\\a^{K\/\\b_\\th,(N-K)\/\\b_r} .\n\\end{equation}\nFrom this relation, we see that $\\bar{g}_\\a^{KN}=0$ unless $K$ is a multiple of $\\b_\\th$ and $N-K$ is a multiple of\n$\\b_r$. Thus, while the resonant angle variable and the initial resonant phase more naturally capture both the coupled nature of\nthe radial and polar motion and the sensitivity of the source to initial conditions, this parametrization is less\nefficient at capturing the behavior of the self-force. 
For example, if one wants to calculate $\\hat{f}^{kn}_\\a$\nfor $0\\leq k < N_\\th$, $0\\leq n < N_r$, then one would need to sample $N_r \\times N_\\th$ points in the\n$q_r$-$q_\\th$ domain, but $\\b_rN_r \\times (\\b_\\th N_\\th+\\b_r N_r)$ points in the $\\bar{q}$-$\\bar{q}_0$ domain. This\noversampling occurs because the resonant parametrization does not take full advantage of the symmetries of the\norbit, which are better captured by the separation of the radial and polar motion in the $q_r$-$q_\\th$ angle\nparametrization.\n\n\\subsection{Dissipative and conservative SSF}\n\nIrrespective of the type of orbit, the self-force can be decomposed into conservative and dissipative parts,\n$F_\\a^\\text{cons}$ and $F_\\a^\\text{diss}$. These parts impact the evolution of EMRIs in different ways\n\\cite{Bara09,DiazETC04,Mino03,HindFlan08} and computationally converge at different rates in the mode-sum\nregularization procedure. The dissipative part $F_\\a^\\text{diss}$ does not require regularization and converges\nexponentially. 
The conservative part $F_\\a^\\text{cons}$ requires regularization and converges as a power law in\nthe number of $l$ modes.\n\nSummarizing our previous discussion \\cite{NasiOsbuEvan19} of this decomposition, the split depends on both the\nretarded force and the advanced force $F^\\text{adv}_\\a$, which depends on the advanced scalar field solution.\nThe decomposition is made in terms of spherical harmonic elements, e.g., $F^{\\text{adv},l}_\\a$ and is given by\n\\begin{align}\n\tF^\\text{diss}_{\\a} &=\\sum_{l=0}^{+\\infty}\\frac{1}{2}\n\t\\left( F^{\\text{ret},l}_{\\a\\pm}-\n\tF^{\\text{adv},l}_{\\a\\pm}\\right),\n\t\\\\ \\label{eqn:consSSF}\n\tF^\\text{cons}_{\\a} &=\\sum_{l=0}^{+\\infty}\\left\\{\\frac{1}{2}\n\t\\left( F^{\\text{ret},l}_{\\a\\pm}+\n\tF^{\\text{adv},l}_{\\a\\pm}\\right)-F^{\\text{S},l}_{\\a\\pm} \\right\\}.\n\\end{align}\nThe inconvenience of calculating the advanced scalar field solution is avoided by using symmetries of Kerr\ngeodesics \\cite{Mino03,HindFlan08,Bara09} (summarized also in \\cite{NasiOsbuEvan19}), which lead to convenient\nrelationships between spacetime components of $F^{\\text{ret},l}_\\a$ and $F^{\\text{adv},l}_\\a$,\n\\begin{equation}\n\\label{eqn:ssfRet2Adv}\n\tF^{\\text{adv},l}_\\a(q_r,q_\\th) = \\eps_{(\\a)}\n\tF^{\\text{ret},l}_\\a(2\\pi-q_r,2\\pi-q_\\th) ,\n\\end{equation}\nwhere $\\eps_{(\\a)}=(-1,1,1,-1)$. Thus, $F_t^\\text{diss}$, $F_r^\\text{cons}$, $F_\\th^\\text{cons}$, and\n$F_\\vp^\\text{diss}$ are symmetric (even) functions on the $q_r-q_\\th$ two-torus, while $F_t^\\text{cons}$,\n$F_r^\\text{diss}$, $F_\\th^\\text{diss}$, and $F_\\vp^\\text{cons}$ are antisymmetric (odd). 
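On a discrete $q_r$-$q_\\th$ grid these symmetry relations make the split cheap to evaluate, since the advanced samples follow from the retarded ones by reversing both torus indices. A minimal sketch for the $r$ component (illustrative Python with a toy periodic function standing in for retarded data; the singular subtraction in Eq.~\\eqref{eqn:consSSF} is omitted):

```python
import numpy as np

N = 16
q = 2.0 * np.pi * np.arange(N) / N
QR, QTH = np.meshgrid(q, q, indexing="ij")

# Toy retarded-force samples (any smooth doubly periodic function will do).
F_ret = np.cos(QR) + 0.3 * np.cos(QTH) + 0.2 * np.sin(QR + 2.0 * QTH)

eps = 1.0  # eps_(alpha) = +1 for the r component

# F_adv(q_r, q_th) = eps * F_ret(2*pi - q_r, 2*pi - q_th): on the periodic
# grid, evaluating at 2*pi - q is the index map i -> (-i) mod N.
idx = (-np.arange(N)) % N
F_adv = eps * F_ret[np.ix_(idx, idx)]

F_diss = 0.5 * (F_ret - F_adv)   # for r: the antisymmetric (odd) part
F_cons = 0.5 * (F_ret + F_adv)   # for r: the symmetric (even) part
```

Since the reversal lands back on grid points, no interpolation is required to form the advanced samples.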
These relationships\nbetween advanced and retarded solutions have been previously discussed \\cite{WarbBara11,Warb15,ThorWard17} in the\ncontext of restricted orbits but, in fact, Eq.~\\eqref{eqn:ssfRet2Adv} holds for arbitrary geodesic motion.\n\n{\\renewcommand{\\arraystretch}{1.25}\n\\begin{table}[t!]\n\t\\caption{Summary of the resonant orbits considered in Sec.~\\ref{sec:results}. In all cases the primary\n\tspin is $a=0.9$ (with $M=1$). The real number values are truncated in the table to four significant figures for brevity.}\n\t\\label{tab:orbits}\n\t\\centering\n\t\\begin{tabular*}{\\columnwidth}{c @{\\extracolsep{\\fill}} c c c c c}\n\t\t\\hline\n\t\t\\hline\n\t\t\\multicolumn{2}{c }{\\quad Model} & $p$ & $e$ & $x_\\text{inc}$ & $\\b_r$:$\\b_\\th$\n\t\t\\\\\n\t\t\\hline\n\t\t\\multicolumn{2}{c}{\\quad $e02.13$} & 3.622 & 0.2 & $\\cos(\\pi\/4)$ & 1:3\n\t\t\\\\\n\t\t\\multicolumn{2}{c}{\\quad $e02.12$} & 4.508 & 0.2 & $\\cos(\\pi\/4)$ & 1:2\n\t\t\\\\\n\t\t\\multicolumn{2}{c}{\\quad $e02.23$} & 6.643 & 0.2 & $\\cos(\\pi\/4)$ & 2:3\n\t\t\\\\\n\t\t\\multicolumn{2}{c}{\\quad $e05.13$} & 3.804 & 0.5 & $\\cos(\\pi\/4)$ & 1:3\n\t\t\\\\\n\t\t\\multicolumn{2}{c}{\\quad $e05.12$} & 4.607 & 0.5 & $\\cos(\\pi\/4)$ & 1:2\n\t\t\\\\\n\t\t\\multicolumn{2}{c}{\\quad $e05.23$} & 6.707 & 0.5 & $\\cos(\\pi\/4)$ & 2:3\n\t\t\\\\\n\t\t\\hline\n\t\t\\hline\n\t\\end{tabular*}\n\\end{table}\n}\n\n\\section{Resonant SSF results}\n\\label{sec:results}\n\nUsing the methods outlined in the prior sections, we generated new results for the SSF on six different\nresonant orbits, the orbital parameters of which are listed in Table \\ref{tab:orbits}. These calculations were made\nwith a \\textsc{Mathematica} code first described in \\cite{NasiOsbuEvan19}. 
These calculations also made use of\nsoftware from the Black Hole Perturbation Toolkit \\cite{BHPTK18}, specifically the \\textsc{KerrGeodesics} and\n\\textsc{SpinWeightedSpheroidalHarmonics} packages.\n\nIn generating numerical results we set $M=1$, which is assumed for the remainder of this work. Each resonant orbit\nhad primary spin $a=0.9$. We focused on 1:3, 1:2, and 2:3 $r\\th$ resonances, the three resonances an EMRI is most\nlikely to encounter during its final years of evolution when its signal falls within the LISA passband\n\\cite{RuanHugh14,BerrETC16}. To pick orbital parameters $(p,e,x)$ that produce $r\\th$-resonant frequencies, we\nfollow the approach of Brink, Geyer, and Hinderer \\cite{BrinGeyeHind15a,BrinGeyeHind15b}. Specified values of $e$\nand $x$ are chosen first, and then $p$ is numerically calculated using the root-finding method described in Sec.~V$\\,$E\nof \\cite{BrinGeyeHind15b}. In our work all of the orbits share the same inclination, $x=\\cos(\\pi\/4)$, while two\ndifferent eccentricities, $e=0.2$ and $e=0.5$, are considered. The resulting values of $p$ (to four places) for\neach resonant orbit are listed in Table \\ref{tab:orbits}.\n\n\\begin{figure*}[th!]\n\t\t\\includegraphics[width=0.7\\textwidth]{Figures\/ssf_converge_save.pdf}\n\t\t\\caption{Convergence of the SSF $l$ modes for resonant models listed in Table \\ref{tab:orbits}.\n\t\tThe dashed and dotted lines depict comparative power-law rates of\n\t\tconvergence for $\\bar{F}^\\text{res}_\\vp(\\bar{q}=5\\pi\/16;\\bar{q}_0=5\\pi\/32\/\\b_\\th)$ as more\n\t\tregularization terms are incorporated. The (black) squares represent individual $l$ modes\n\t\tof the unregularized SSF, the sum of which clearly diverges. The (red) triangles are the residuals\n\t\tfrom subtracting $A_\\vp$ and $B_\\vp$. The (blue) diamonds represent the residuals after subtracting\n\t\t$D_{\\vp,2}$, obtained through numerical fitting. 
The (purple) circles represent the inclusion of\n\t\t$D_{\\vp,4}$, also approximated via a numerical fit.}\n\t\t\\label{fig:convergenceFph}\n\\end{figure*}\n\nAs discussed in Sec.~\\ref{sec:constructSSF}, for resonant orbits we express the SSF as a function of the\nresonant angle variable $\\bar{q}$ and the resonant phase parameter $\\bar{q}_0$, i.e.,\n$\\bar{F}^\\text{res}_\\a(\\bar{q};\\bar{q}_0)$, or (as convenient) as a function of the more general angle\nvariables $q_r$ and $q_\\th$ and the initial phases $q_{r0}$ and $q_{\\th 0}$, i.e.,\n$\\hat{F}_\\a(\\b_r \\bar{q},\\b_\\th \\bar{q}+\\b_\\th \\bar{q}_{0})=\\hat{F}_\\a(q_r,q_\\th+q_{\\th 0})$. Plotting the SSF as\na function of $\\bar{q}$, as shown in Sec.~\\ref{sec:FresQQ0}, highlights the periodicity of the SSF during resonances\nand is qualitatively representative of the Mino or coordinate time dependence of the SSF. On the other hand,\nplotting the SSF as a function of $q_r$ and $q_\\th$, as shown in Sec.~\\ref{sec:FresQrQth}, separates the dependence\nof the SSF on the radial and polar motion of the orbit. This way of depicting the SSF mirrors the parametrizations\nused for nonresonant orbits, as seen in \\cite{Vand18,NasiOsbuEvan19}. To better analyze the impact of different\norbital parameters and types of resonances, we present each spacetime component of the self-force separately.\n\n\\subsection{Regularization and convergence of results}\n\\label{sec:regAndConvergeRes}\n\nThe SSF is constructed by mode-sum regularization and the numerical fitting procedures discussed in\nSec.~\\ref{sec:ssfReg}. The convergence of the mode-sum regularization procedure is well understood: subtracting\nthe analytically known regularization parameters, $A_\\a$ and $B_\\a$, produces residuals that fall off as\n$\\sim l^{-2}$ for large $l$. There is no fundamental difference when an orbit is on resonance. 
In\nFig.~\\ref{fig:convergenceFph} we plot the mode-sum convergence of $\\bar{F}^\\text{res}_\\vp$ at the point\n$(\\bar{q}=5\\pi\/16,\\b_\\th \\bar{q}_0=5\\pi\/32)$ for all six resonant configurations. Points refer to the $l$-mode\nresiduals that result from subtracting the analytically known and numerically fitted regularization parameters, while\nthe lines depict expected power-law convergence rates for large $l$. In each resonance that we consider, the\nresiduals approach their expected asymptotic rates of convergence.\n\nWhile all of the models have the same asymptotic behavior at large $l$, Fig.~\\ref{fig:convergenceFph} demonstrates\nthat for low $l$ modes the $e=0.2$ sources converge faster than those with $e=0.5$, the 2:3 resonances converge\nfaster than the 1:2 resonances, and the 1:2 resonances converge faster than the 1:3 ones. Higher eccentricities\nrequire a broader frequency spectrum to capture the radial motion. Additionally, sources that orbit farther into\nthe strong field excite larger perturbations and require higher frequency modes to capture the behavior of the\nself-force. The 1:3 resonances have the smallest pericentric separations, the 1:2 resonances have the next smallest,\nand the 2:3 resonances have the largest, which is reflected in varying rates of convergence at low $l$.\n\nGiven these factors, the $e05.13$ orbit presents the greatest challenge. For this model it takes thousands of\nadditional modes to capture the behavior of the SSF compared to other resonant configurations. Because of the slow\nconvergence at low multipoles, truncating mode summations at the same value of $l_\\text{max}$ as the other orbits\nwill introduce larger numerical errors in the retarded SSF contributions. While these numerical errors are still\nrelatively small, they are significant enough that they make it much more difficult to fit for higher-order\nregularization parameters. The accuracy of the conservative component of the SSF suffers because of this. 
In\nconsequence, the conservative SSF is known to only $\\sim 2$ digits of accuracy for the $e05.13$ orbit, with the\nnumerical error greatest where a component of the SSF passes through zero. Fortunately, the\ndissipative component typically dominates over the conservative contribution in regions of the orbit where the\nconservative contribution is known less accurately.\n\n\\begin{figure*}\n\t\\centering\n\t\\includegraphics[width=0.95\\textwidth]{Figures\/ssf_r_var_rmin_save}\n\t\\caption{Radial component of the SSF as a function of the resonant angle variable $\\bar{q}$,\n\ti.e., $\\bar{F}^\\text{res}_r(\\bar{q};\\bar{q}_0)$, for the six resonant geodesics listed in Table\n\t\\ref{tab:orbits}. The SSF is weighted by the cube of the pericentric radius, $r_\\text{min}^3$, so that\n\tall six orbits are of comparable magnitude. The dot-dashed (black) line represents the SSF for a\n\tresonant geodesic with an initial resonant phase of $\\b_\\th \\bar{q}_0=q_{\\th 0}=0$, while the solid (red)\n\tline represents the SSF for a resonance with the same orbital parameters but an initial resonant phase\n\tof $\\b_\\th \\bar{q}_0=q_{\\th 0}=-\\pi\/2$. The shaded grey region represents all of the SSF values produced\n\tby varying the initial phase parameter $\\bar{q}_0$ from $0$ to $2\\pi$.}\n\\label{fig:ssfRVar}\n\\end{figure*}\n\n\\begin{figure*}\n\t\\centering\n\t\\includegraphics[width=0.95\\textwidth]{Figures\/ssf_t_var_rmin_save}\n\t\\caption{Time component of the SSF as a function of the resonant\n\tangle variable $\\bar{q}$, i.e., $\\bar{F}^\\text{res}_t(\\bar{q};\\bar{q}_0)$,\n\tfor the six resonant geodesics listed in Table \\ref{tab:orbits}.
The dot-dashed\n\t(black) line represents the SSF for a resonant geodesic with an initial resonant\n\tphase of $\\b_\\th \\bar{q}_0=q_{\\th 0}=0$, the solid (red) line represents an initial\n\tresonant phase of $\\b_\\th \\bar{q}_0=q_{\\th 0}=-\\pi\/2$, and the shaded grey region\n\trepresents all of the SSF values produced by varying $\\bar{q}_0$ from $0$ to $2\\pi$.}\n\t\\label{fig:ssfTVar}\n\\end{figure*}\n\n\\begin{figure*}\n\t\\centering\n\t\\includegraphics[width=0.95\\textwidth]{Figures\/ssf_th_var_rmin_save}\n\t\\caption{Polar component of the SSF as a function of the resonant angle\n\tvariable $\\bar{q}$, i.e., $\\bar{F}^\\text{res}_\\th(\\bar{q};\\bar{q}_0)$, for\n\tthe six resonant geodesics listed in Table \\ref{tab:orbits}. The dot-dashed\n\t(black) line represents the SSF for a resonant geodesic with an initial resonant\n\tphase of $\\b_\\th \\bar{q}_0=q_{\\th 0}=0$, the solid (red) line represents an initial\n\tresonant phase of $\\b_\\th \\bar{q}_0=q_{\\th 0}=-\\pi\/2$, and the shaded grey region\n\trepresents all of the SSF values produced by varying $\\bar{q}_0$ from $0$ to $2\\pi$.}\n\t\\label{fig:ssfThVar}\n\\end{figure*}\n\n\\begin{figure*}\n\t\\centering\n\t\\includegraphics[width=0.95\\textwidth]{Figures\/ssf_ph_var_rmin_save}\n\t\\caption{Azimuthal component of the SSF as a function of the resonant angle variable\n\t$\\bar{q}$, i.e., $\\bar{F}^\\text{res}_\\vp(\\bar{q};\\bar{q}_0)$, for the six resonant geodesics\n\tlisted in Table \\ref{tab:orbits}. 
The dot-dashed (black) line represents the SSF for a\n\tresonant geodesic with an initial resonant phase of $\\b_\\th \\bar{q}_0=q_{\\th 0}=0$, the\n\tsolid (red) line represents an initial resonant phase of $\\b_\\th \\bar{q}_0=q_{\\th 0}=-\\pi\/2$,\n\tand the shaded grey region represents all of the SSF values produced by varying $\\bar{q}_0$\n\tfrom $0$ to $2\\pi$.}\n\t\\label{fig:ssfPhVar}\n\\end{figure*}\n\n\\subsection{Scalar self-force as a function of $\\bar{q}$}\n\\label{sec:FresQQ0}\n\nFor a resonant orbit we can present the SSF in a simple line plot as a function of the net angle variable\n$\\bar{q}$, as depicted in Figs.~\\ref{fig:ssfRVar}, \\ref{fig:ssfTVar}, \\ref{fig:ssfThVar}, and \\ref{fig:ssfPhVar}.\nIn these plots the SSF has been weighted by the cube of the pericentric radius of the orbit (i.e.,\n$r_\\text{min}^3\\,\\bar{F}^\\text{res}_\\a$), which more tightly bounds the variations in the SSF and facilitates\ncomparisons across different models. Each plot shows the SSF variation with $\\bar{q}$ for two different initial\nconditions (i.e., values of $\\bar{q}_0$). 
The dot-dashed (black) curves show the SSF when the initial polar phase\nis $\\b_\\th \\bar{q}_0=q_{\\th 0}=0$ (i.e., initial conditions $x^\\mu_p(\\la=0)=(0,r_\\text{min},\\th_\\text{min},0)$ and\n$u^r(0)=u^\\th(0)=0$), while the solid (red) curves show the SSF when $\\b_\\th \\bar{q}_0=q_{\\th 0}=-\\pi\/2$ (i.e., initial\nconditions $x^\\mu_p(\\la=0)=(0,r_\\text{min},\\pi\/2,0)$, $u^r(0)=0$, and $u^\\th(0)<0$).\\footnote{Note that $q_{\\th 0}$\nis held constant rather than $\\bar{q}_0$, because the same value of $\\bar{q}_0$ will generate different initial\nconditions for resonances with different values of $\\b_\\th$.} The shaded grey regions depict the range of SSF\nvalues that result from varying the initial phases---either $q_{\\th 0}$ or $\\bar{q}_0$---through their entire range.\n\nThe SSF is, of course, periodic with respect to $\\bar{q}$, but interestingly for the 2:3 resonances\n$\\bar{F}^\\text{res}_t$, $\\bar{F}^\\text{res}_r$, and $\\bar{F}^\\text{res}_\\vp$ are additionally periodic on the half\ninterval $[0,\\pi]$. This behavior arises in the Kerr background because the time, radial, and azimuthal components\nof the SSF are invariant under parity transformations (i.e., reflections $\\th_p \\rightarrow \\pi-\\th_p$), while the\npolar component flips sign \\cite{Warb15} (equally true of the gravitational self-force \\cite{Vand18}). For a\n2:3 resonance, the radial motion of the orbit is identical on the intervals $[0,\\pi]$ and $[\\pi,2\\pi]$, while the\npolar motion is related by the parity transformation. 
From this fact follows the repetition in $\\bar{F}^\\text{res}_t$,\n$\\bar{F}^\\text{res}_r$, and $\\bar{F}^\\text{res}_\\vp$, while also giving the reflection behavior\n$\\bar{F}^\\text{res}_\\th(\\bar{q};\\bar{q}_0)=-\\bar{F}^\\text{res}_\\th(\\bar{q}+\\pi;\\bar{q}_0)$.\n\nThese symmetries in the geodesic motion also manifest themselves in the number of low-frequency oscillations\nthat appear in the SSF components, particularly in the low-eccentricity orbits. Focusing on $\\bar{F}^\\text{res}_r$\nin Fig.~\\ref{fig:ssfRVar}, the SSF locally peaks 6 times for the $e02.13$ and $e02.23$ models and 4 times in\nthe $e02.12$ case. The peaks closely align with the epochs at which each orbit passes through its polar extrema.\nA similar behavior is also seen for $\\bar{F}^\\text{res}_t$, $\\bar{F}^\\text{res}_\\vp$, and the higher eccentricity\nmodels, though for $e=0.5$ it is more difficult to identify local peaks, particularly as the orbit approaches\napocenter. For $\\bar{F}^\\text{res}_\\th$ in Fig.~\\ref{fig:ssfThVar}, the peaks align with the passage of the source\nthrough $\\th_\\text{min}$, while the troughs align with its passages through $\\pi-\\th_\\text{min}$.\n\nThe degree to which the SSF varies with respect to changes in initial phase depends primarily on which component\nof the SSF vector we consider. The time component, $\\bar{F}^\\text{res}_t$ (Fig.~\\ref{fig:ssfTVar}), displays the\nleast effect of varying the initial conditions. The azimuthal component, $\\bar{F}^\\text{res}_\\vp$\n(Fig.~\\ref{fig:ssfPhVar}), shows slightly greater variations with respect to initial conditions. The radial\ncomponent, $\\bar{F}^\\text{res}_r$ (Fig.~\\ref{fig:ssfRVar}), is still more affected. Finally, the polar angular\ncomponent, $\\bar{F}^\\text{res}_\\th$ (Fig.~\\ref{fig:ssfThVar}), displays the most significant variations. 
To\nunderstand these variations, recall that the radial and polar position of the resonant source, $\\bar{r}_p$\nand $\\bar{\\th}_p$, depend on the angle variables according to\n\\begin{align}\n\t\\bar{r}_p&=\\hat{r}_p(q_r)=\\hat{r}(\\b_r \\bar{q}), \\\\\n\t\\bar{\\th}_p&=\\hat{\\th}_p(q_\\th+q_{\\th 0})=\\hat{\\th}(\\b_\\th \\bar{q}+\\b_\\th \\bar{q}_0).\n\\end{align}\nSince varying the initial phase shifts only the polar motion, a broader grey band indicates a stronger dependence on the polar motion. Thus, $\\bar{F}^\\text{res}_t$\nprimarily depends on the radial motion of the source, while $\\bar{F}^\\text{res}_r$ is sensitive to both polar and\nradial motions. In behavior opposite to that of $\\bar{F}^\\text{res}_t$, $\\bar{F}^\\text{res}_\\th$ is primarily dependent on\nthe polar motion of the orbit. Finally, $\\bar{F}^\\text{res}_\\vp$ depends mostly on the radial motion of the source,\nthough the polar position becomes important near pericenter.
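A toy numerical check of this parametrization, and of the parity symmetry invoked above for 2:3 resonances, can be made with illustrative libration profiles standing in for the true Kerr functions $\hat{r}$ and $\hat{\th}$ (the functional forms below are assumptions, not the paper's numerical solutions):

```python
import numpy as np

# Toy libration profiles standing in for the true Kerr solutions r-hat and
# theta-hat (illustrative functional forms, not the paper's numerical data).
def r_hat(q_r, r_min=3.0, r_max=8.0):
    # radial libration: pericenter at q_r = 0, apocenter at q_r = pi
    return 0.5 * (r_max + r_min) - 0.5 * (r_max - r_min) * np.cos(q_r)

def th_hat(q_th, th_min=np.pi / 4):
    # polar libration about the equator: cos(theta) = cos(theta_min) cos(q_th)
    return np.arccos(np.cos(th_min) * np.cos(q_th))

beta_r, beta_th = 2, 3                    # a 2:3 r-theta resonance
qbar = 2 * np.pi * np.arange(400) / 400   # resonant phase grid
r_p = r_hat(beta_r * qbar)                # bar-r_p(qbar), fiducial qbar_0 = 0
th_p = th_hat(beta_th * qbar)

# Advancing qbar by pi repeats the radial motion and parity-reflects the
# polar motion, as used in the text.
r_shift = r_hat(beta_r * (qbar + np.pi))
th_shift = th_hat(beta_th * (qbar + np.pi))
assert np.allclose(r_shift, r_p)
assert np.allclose(th_shift, np.pi - th_p)

# Hence a parity-even toy "F_t" repeats on [0, pi], while a parity-odd
# toy "F_th" (odd in cos(theta)) flips sign.
assert np.allclose(np.cos(th_shift) ** 2 / r_shift ** 3,
                   np.cos(th_p) ** 2 / r_p ** 3)
assert np.allclose(np.cos(th_shift) / r_shift ** 3,
                   -np.cos(th_p) / r_p ** 3)
```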
The dot-dashed lines depict the resonant\nmotion for fiducial initial conditions ($\\bar{q}_0=0$).}\n\\label{fig:ssfTTorus}\n\\end{figure*}\n\n\\begin{figure*}\n\t\\centering\n\t\\includegraphics[width=0.89\\textwidth]{Figures\/ssf_r_contour_save}\n\t\\caption{Radial component of the scalar self-force $\\hat{F}_r$ for the six orbits listed in\nTable~\\ref{tab:orbits}.}\n\t\\label{fig:ssfRTorus}\n\\end{figure*}\n\n\\begin{figure*}\n\t\\centering\n\t\\includegraphics[width=0.9\\textwidth]{Figures\/ssf_th_contour_save}\n\t\\caption{Polar component of the scalar self-force $\\hat{F}_\\th$ for the six orbits listed in\n\tTable~\\ref{tab:orbits}.}\n\t\\label{fig:ssfThTorus}\n\\end{figure*}\n\n\\begin{figure*}\n\t\\centering\n\t\\includegraphics[width=0.9\\textwidth]{Figures\/ssf_ph_contour_save}\n\t\\caption{Azimuthal component of the scalar self-force $\\hat{F}_\\vp$ for the six orbits listed in\n\tTable~\\ref{tab:orbits}.}\n\t\\label{fig:ssfPhTorus}\n\\end{figure*}\n\n\\subsection{Scalar self-force as a function of $q_r$ and $q_\\th$}\n\\label{sec:FresQrQth}\n\nAn alternative way to visualize the dependence of the SSF on the radial and polar motion of resonant orbits is\nto project the SSF components onto the two-torus spanned by $q_r+q_{r0}$ and $q_\\th+q_{\\th 0}$. This projection\nis depicted in Figs.~\\ref{fig:ssfTTorus}, \\ref{fig:ssfRTorus}, \\ref{fig:ssfThTorus}, and \\ref{fig:ssfPhTorus}.\nWe again weight the SSF components by the cube of the pericentric radius of the orbit. The dot-dashed (black) lines\ntrace the motion of an orbit with initial conditions $\\b_\\th \\bar{q}_0=q_{\\th 0}=0$. Sampling the SSF as the source\nmoves along these tracks reproduces the black dot-dashed curves in Figs.~\\ref{fig:ssfRVar}, \\ref{fig:ssfTVar},\n\\ref{fig:ssfThVar}, and \\ref{fig:ssfPhVar}. 
Maintaining previous notation, we refer to the SSF parametrized by\n$q_r+q_{r0}$ and $q_\\th+q_{\\th 0}$ as $\\hat{F}_\\a$.\n\nAs observed in Sec.~\\ref{sec:FresQQ0}, $\\hat{F}_t$ (shown in Fig.~\\ref{fig:ssfTTorus}) primarily depends on the\nradial motion, with little variation as the orbit advances along the $q_\\th$ axis. As the contours show in\nFigs.~\\ref{fig:ssfRTorus} and \\ref{fig:ssfPhTorus}, the radial and azimuthal components, $\\hat{F}_r$ and\n$\\hat{F}_\\vp$, are sensitive to both the radial and polar motion, especially near pericenter. Finally, the contours\nof the SSF seen in Fig.~\\ref{fig:ssfThTorus} clearly demonstrate the antisymmetry across the equatorial plane\nof $\\hat{F}_\\th$, as discussed in the previous section.\n\nIn agreement with previous investigations \\cite{WarbBara10,WarbBara11,Warb15,NasiOsbuEvan19} of the SSF, we see\nthat $\\hat{F}_t$ is strictly positive. This contrasts with the gravitational self-force case where the time component\ncan become negative in both radiation and Lorenz gauge \\cite{Vand16,Vand18}.\\footnote{We do not try to draw any\nphysical interpretation from this behavior since the SSF and gravitational self-force are coordinate and (in the\nGSF case) gauge-dependent results.} On the other hand, $\\hat{F}_r$ is predominantly negative across the entire\ntorus, though it becomes slightly positive near apocenter. This behavior is consistent with the observation\n\\cite{WarbBara10} that higher black hole spin leads to an attractive radial SSF.
Large inclinations, on the other\nhand, lead predominantly to positive values of the SSF, as seen in SSF results for spherical orbits \\cite{Warb15}.\nHowever, those prior observations involved inclinations $x\\gtrsim 0.5$, which we did not consider here.\n\nInterestingly, while all of the SSF components peak in magnitude following pericenter passage, the magnitude of\nthese peaks grows for $\\hat{F}_t$ and $\\hat{F}_r$ as $r_\\text{min}$ decreases, while the peaks grow for\n$\\hat{F}_\\th$ and $\\hat{F}_\\vp$ as $r_\\text{min}$ increases. The latter behavior is actually due to the factor of\n$r_\\text{min}^3$. If one removes this weighting, then closer pericenter passages excite larger peaks in the SSF\nfor all components. This suggests that the leading order behavior of $\\hat{F}_\\th$ and $\\hat{F}_\\vp$ is closer\nto $1\/r^2$, which one might expect based on dimensional analysis ($[F_{\\th,\\vp}\/F_{t,r}]_\\text{dim}\n\\sim [M]_\\text{dim}$).\n\n\\section{Evolution of the orbital constants}\n\\label{sec:flux}\n\n\\subsection{Overview}\n\nIn the presence of radiative losses and the self-force, the ordinarily constant quantities $\\mathcal{E}$,\n$\\mathcal{L}_z$, and $\\mathcal{Q}$ are perturbed and gradually evolve according to\n\\begin{gather}\n\\label{eqn:edot}\n\t\\dot{\\mathcal{E}} = - \\frac{q^2}{u^t} a_t, \\qquad \\qquad\n\t\\dot{\\mathcal{L}}_z = \\frac{q^2}{u^t} a_\\vp,\n\t\\\\ \\label{eqn:qdot}\n\t\\dot{\\mathcal{Q}}=\n\t \\frac{2}{u^t} \\left[q^2 K_{\\mu\\nu}u^\\mu a^\\nu - (\\mathcal{L}_z\n\t-a\\mathcal{E})(\\dot{\\mathcal{L}}_z-a\\dot{\\mathcal{E}})\n\t\\right],\n\\end{gather}\nwhere an overdot represents a derivative with respect to Boyer-Lindquist time, and the self-acceleration $a^\\mu$ is given\nby $\\mu a^\\nu = (g^{\\mu\\nu}+u^\\mu u^\\nu)F_\\mu = F^\\nu - q^{-2}u^\\nu d\\mu\/d\\tau$. 
Note that the lack of orthogonality between\n$F_\\nu$ and $u^\\nu$ drives changes in the mass $\\mu$ (see\nAppendix~\\ref{app:RestMass}).\n\nThe changes $\\dot{\\mathcal{E}}$, $\\dot{\\mathcal{L}}_z$, and $\\dot{\\mathcal{Q}}$ consist of both secularly growing\nand oscillating parts, with the secular piece found by orbit-averaging \\eqref{eqn:edot} and \\eqref{eqn:qdot} with\nrespect to $t$. For a nonresonant orbit, the averaging is over a long timescale,\n\\begin{equation} \\label{eqn:orbitAvg}\n\t\\langle \\dot{\\mathcal{X}} \\rangle \\equiv \\lim_{T\\rightarrow \\infty}\n\t \\frac{1}{T} \\int_0^T \\dot{\\mathcal{X}} dt,\n\t\\qquad \\mathcal{X} =\n\t\\mathcal{E},\\, \\mathcal{L}_z,\\, \\mathcal{Q} .\n\\end{equation}\nThese averages produce the leading order, adiabatic evolution of the system \\cite{HindFlan08}. The time integrals\ncan be reexpressed in terms of the angle variables that are used to parametrize the self-force. Then the averaging\nis done over the motion on the torus \\cite{DrasHugh04,GrosLeviPere13}. For nonresonant orbits, $\\dot{\\mathcal{E}}$,\n$\\dot{\\mathcal{L}}_z$, and $\\dot{\\mathcal{Q}}$ are averaged over the entire two-torus by integrating with equal weight\nover all $q_r$ and $q_\\th$.
For resonances, these orbit-averages are carried out over a single, one-dimensional\nclosed track on the torus, reducing \\eqref{eqn:orbitAvg} to a single integral over the resonant phase variable\n$\\bar{q}$,\n\\begin{align} \\label{eqn:work}\n\t\\mu \\langle \\dot{\\mathcal{E}} \\rangle & =\n\t- \\frac{q^2}{\\G} \\int_0^{2\\pi} \\frac{d\\bar{q}}{2\\pi}\\,\n\t\\bar{\\Sig}_p\\,\n\t\\bar{F}^\\text{res}_t = q^2\\mathcal{W},\n\t\\\\ \\label{eqn:torque}\n\t\\mu \\langle \\dot{\\mathcal{L}}_z \\rangle &=\n\t\\frac{q^2}{\\G} \\int_0^{2\\pi} \\frac{d\\bar{q}}{2\\pi}\\,\n\t\\bar{\\Sig}_p\\,\n\t\\bar{F}^\\text{res}_\\vp = q^2\\mathcal{T},\n\t\\\\ \\label{eqn:qdotAvg}\n\t\\mu \\langle \\dot{\\mathcal{Q}} \\rangle &=\n\t2q^2\\Big[- (\\mathcal{L}_z\n\t-a\\mathcal{E})(\\mathcal{T}-a\\mathcal{W})\n\t\\\\\n\t& \\qquad \\qquad \\; +\n\t\\frac{1}{\\G} \\int_0^{2\\pi} \\frac{d\\bar{q}}{2\\pi}\\,\n\t\\bar{\\Sig}_p\\,\\bar{K}_p^{\\mu\\nu}\n\t\\bar{u}_\\mu \\bar{F}^\\text{res}_\\nu \\Big] .\n\t\\notag\n\\end{align}\nIn the expressions above, all quantities with an overbar are understood to be functions of $\\bar{q}$ and\nparametrized by $\\bar{q}_0$ [e.g.,\n$\\bar{\\Sig}_p=\\bar{\\Sig}(\\bar{q}; \\bar{q}_0)=\\bar{r}_p^2(\\bar{q})+a^2\\cos^2\\bar{\\th}_p(\\bar{q}+\\bar{q}_0)$].\nThe changes $\\langle \\dot{\\mathcal{E}} \\rangle$ and $\\langle \\dot{\\mathcal{L}}_z \\rangle$ are directly related to\nthe average rate of work $\\mathcal{W}$ and torque $\\mathcal{T}$ done on the small body by the SSF (per charge squared)\nand incorporate the fact that the average change in $\\mu$ vanishes (see Appendix~\\ref{app:RestMass}).\n\nFor nonresonant orbits, the conservative components of the self-force vanish when averaged over the entire torus.\nThis fact can be seen from the symmetries of \\eqref{eqn:consSSF} and \\eqref{eqn:ssfRet2Adv}, combined with the\nexpressions for $\\dot{\\mathcal{E}}$, $\\dot{\\mathcal{L}}_z$, and $\\dot{\\mathcal{Q}}$.
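Numerically, one-dimensional averages of this type are well suited to uniform sampling in $\bar{q}$, which is spectrally accurate for smooth periodic integrands. A minimal sketch of the discretization of \eqref{eqn:work}, with toy periodic profiles standing in for $\bar{\Sig}_p$ and $\bar{F}^\text{res}_t$ and placeholder values for $q^2$ and $\G$ (all assumptions, chosen so the average has a closed form):

```python
import numpy as np

# Discrete evaluation of the resonant average in Eq. (work):
# mu <dE/dt> = -(q^2/Gamma) * (1/2pi) \int Sigma_p(qbar) F^res_t(qbar) dqbar.
q2, Gamma, N = 1.0, 1.0, 64               # toy charge^2 and Mino-time factor
qbar = 2 * np.pi * np.arange(N) / N       # uniform grid on [0, 2pi)

Sigma_p = 2.0 + np.cos(qbar)              # toy Sigma_p(qbar)
F_t = np.cos(qbar)                        # toy F^res_t(qbar)

# Uniform sampling of a smooth periodic integrand converges spectrally,
# and is exact here because the integrand is a low-degree trig polynomial.
W = -(q2 / Gamma) * np.mean(Sigma_p * F_t)

# Closed form: (1/2pi) \int (2 + cos q) cos q dq = 1/2, so W = -1/2.
assert np.isclose(W, -0.5)
```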
Only the dissipative\nself-force contributes to the leading order adiabatic evolution of the system when it is off resonance. When on\nresonance, we cannot make use of these same symmetries to discard the conservative component of the self-force in\n\\eqref{eqn:work}, \\eqref{eqn:torque}, and \\eqref{eqn:qdotAvg}. However, flux-balance conditions do confirm that\nconservative contributions to $\\langle \\dot{\\mathcal{E}}\\rangle$ and $\\langle \\dot{\\mathcal{L}}_z\\rangle$ continue\nto vanish, as we further discuss in Sec.~\\ref{sec:EnAndLz}. Additionally, the averages over an $r\\th$ resonance\nretain their dependence on $\\bar{q}_0$, meaning that they vary according to the initial phase at which the system\nenters a resonance, as demonstrated previously \\cite{FlanHughRuan14,BerrETC16}. Thus different initial conditions\ncan either diminish or enhance the averaged evolution of $\\mathcal{E}$, $\\mathcal{L}_z$, and $\\mathcal{Q}$ during\na resonance. The following subsections detail this behavior in the scalar case and provide numerical data on how\nthe conservative and dissipative components of the SSF contribute to $\\langle \\dot{\\mathcal{E}} \\rangle$,\n$\\langle \\dot{\\mathcal{L}}_z \\rangle$, and $\\langle \\dot{\\mathcal{Q}} \\rangle$.\n\n\\subsection{Energy and angular momentum changes for a resonant orbit}\n\\label{sec:EnAndLz}\n\nFlux-balance equates the average changes in the orbital energy and angular momentum,\n$\\langle \\dot{\\mathcal{E}} \\rangle$ and $\\langle \\dot{\\mathcal{L}}_z \\rangle$, to the average radiative fluxes\n\\cite{Galt82,QuinWald99,Mino03}. 
For energy, the average work $\\mathcal{W}$ done by the SSF balances the total flux\n$\\langle \\dot{E}\\rangle^\\mathrm{tot}$ radiated by the scalar field to infinity and down the horizon, with the\non-resonance fluxes having slightly modified expressions\n\\begin{align}\n\\label{eqn:fluxEBalance}\n\t-\\mathcal{W} &= \\langle \\dot{E} \\rangle^\\mathrm{tot} \\equiv\n\t\\langle \\dot{E} \\rangle^\\mathcal{H}\n\t+ \\langle \\dot{E} \\rangle^\\infty,\n\t\\\\\n\t\\langle \\dot{E} \\rangle^\\mathcal{H} & = \\frac{1}{4\\pi}\n\t\\sum_{\\hat{l} = 0}^\\infty \\sum_{m=-\\hat{l}}^{\\hat{l}} \\sum_{N=-\\infty}^\\infty\n\t\\o_{mN} \\g_{mN} |\\bar{C}^-_{\\hat{l} mN}|^2,\n\t\\\\\n\t\\langle \\dot{E} \\rangle^\\infty & = \\frac{1}{4\\pi}\n\t\\sum_{\\hat{l} = 0}^\\infty \\sum_{m=-\\hat{l}}^{\\hat{l}} \\sum_{N=-\\infty}^\\infty\n\t\\o_{mN}^2 |\\bar{C}^+_{\\hat{l} mN}|^2 .\n\\end{align}\nHere $\\langle \\dot{E} \\rangle^\\mathcal{H}$ is the energy flux (per charge squared) through the horizon, and\n$\\langle \\dot{E} \\rangle^\\infty$ is the energy flux (per charge squared) at infinity, with\n$\\g_{mN}\\equiv \\o_{mN} - ma\/(2Mr_+)$ being the spatial frequency of the radial modes at the horizon and\n$r_+ \\equiv M+\\sqrt{M^2-a^2}$ denoting the radius of the outer horizon. In the resonant case, the fluxes include\na single sum over the net harmonic number $N$ of the net amplitudes $\\bar{C}^\\pm_{\\hat{l} mN}$. The net amplitudes\nare themselves sums \\eqref{eqn:ClmN2Clmkn} over amplitudes $\\hat{C}^\\pm_{\\hat{l} mkn}$ with individual radial and\npolar harmonic numbers $n$ and $k$. These underlying sums reflect the coherence between all harmonics of the\nradial and polar librations that contribute to a given $N$. 
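The structural difference between the coherent, on-resonance combination and the incoherent, off-resonance combination of amplitudes can be illustrated with a toy calculation (the complex amplitudes below are arbitrary illustrative values, not computed field data):

```python
import numpy as np

# On resonance, amplitudes sharing a net harmonic number N add coherently,
# so the flux carries |sum_k C_k|^2 rather than sum_k |C_k|^2.
rng = np.random.default_rng(0)
C_hat = rng.normal(size=4) + 1j * rng.normal(size=4)  # toy amplitudes

coherent = abs(np.sum(C_hat)) ** 2             # |net amplitude|^2
incoherent = np.sum(np.abs(C_hat) ** 2)        # off-resonance-style sum
interference = coherent - incoherent

# The difference is exactly the sum of cross terms 2 Re(C_j conj(C_k)),
# which would average away off resonance.
cross = sum(2 * (C_hat[j] * np.conj(C_hat[k])).real
            for j in range(4) for k in range(j + 1, 4))
assert np.isclose(interference, cross)
```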
In this way, interference terms appear in the flux\nthat would otherwise average to zero in the off-resonance case.\n\n\\begin{figure}[tb]\n\t\\centering\n\t\\includegraphics[width=0.42\\textwidth]{Figures\/enLzFlux_save.pdf}\n\t\\caption{Residual variations in the energy (top panel) and angular\n\tmomentum (bottom panel) fluxes as a function of the initial phase\n\t$q_{\\th 0}=\\b_\\th \\bar{q}_0$ for the $e02.23$ (solid black curves),\n\t$e05.23$ (dot-dashed blue curves), and\n\t$e05.12$ (dotted red curves) orbits in Table \\ref{tab:orbits}.\n\t}\n\t\\label{fig:fluxRes}\n\\end{figure}\n\n{\\renewcommand{\\arraystretch}{1.75}\n\\begin{table*}[!bthp]\n\\caption{Energy and angular momentum fluxes for the resonant-orbit models listed in Table \\ref{tab:orbits}.\nFluxes through the horizon, $\\langle\\dot{E}\\rangle^\\mathcal{H}$ and $\\langle\\dot{L}_z\\rangle^\\mathcal{H}$,\nand infinity, $\\langle\\dot{E}\\rangle^\\infty$ and $\\langle\\dot{L}_z\\rangle^\\infty$, are included. Each model contains\na row of fluxes for an orbit with initial phase $q_{\\th 0} = \\b_\\th\\bar{q}_0=0$ and a row of fluxes for an orbit with\n$q_{\\th 0} = \\b_\\th\\bar{q}_0= -\\pi\/2$. A third row in each case shows the $\\bar{q}_0$ averages of the fluxes as\ndefined by \\eqref{eqn:fluxAvg}, which ignore constructive and destructive interference terms. The reported\nprecision in each flux indicates the accuracy of each calculation (though we truncate more accurate results at nine\ndecimal places). The total fluxes are also compared to the local work and torque due to the SSF, $\\mathcal{W}$ and\n$\\mathcal{T}$, to illustrate the (orbit-averaged) fractional errors in the flux balance relations.
These errors\nrange from $\\sim 10^{-11}$ to $\\sim 10^{-5}$, reflecting the numerical accuracy of our SSF results.}\n\t\\label{tab:fluxes}\n\t\\centering\n\t\\begin{tabular*}{\\textwidth}{c @{\\extracolsep{\\fill}} c c c c c c c}\n\t\t\\hline\n\t\t\\hline\n\t\t\\vspace{-10pt}\n\t\t\\\\\n\t\tModel\n\t\t& $\\b_\\th\\bar{q}_0$\n\t\t& $\\langle\\dot{E}\\rangle^\\mathcal{H}\\times {10^5}$\n\t\t& $\\langle\\dot{L}_z\\rangle^\\mathcal{H}\\times {10^4}$\n\t\t& $\\langle\\dot{E}\\rangle^\\infty\\times {10^3}$\n\t\t& $\\langle\\dot{L}_z\\rangle^\\infty\\times {10^3}$\n\t\t& $\\left|1+\\frac{\\langle\\dot{E}\\rangle^\\text{tot}}{\\mathcal{W}}\\right|$\n\t\t& $\\left|1+\\frac{\\langle\\dot{L}_z\\rangle^\\text{tot}}\n\t\t{\\mathcal{T}}\\right|$\n\t\t\\\\\n\t\t\\vspace{-10pt}\n\t\t\\\\\n\t\t\\hline\n\t\n\t\t$e02.13$ & $0$ & $-4.411457095$ & $-7.017966266$\n\t\t& $1.301535\\phantom{000}$ & $7.677846\\phantom{000}$ & $9\\times 10^{-7}$ &\n\t\t$8\\times 10^{-7}$\n\t\t\\\\\n\t\t& $-\\pi\/2$ & $-4.411497781$ & $-7.017992874$\n\t\t& $1.301534\\phantom{000}$ & $7.677831\\phantom{000}$ & $7\\times 10^{-7}$ &\n\t\t$6\\times 10^{-7}$\n\t\t\\\\\n\t\t& avg & $-4.411477437$ & $-7.017979570$\n\t\t& $1.301535\\phantom{000}$ & $7.677838\\phantom{000}$ & - & -\n\t\t\\\\ \\hline\n\t\t$e02.12$ & 0 & $-2.021123696$ & $-3.395925026$\n\t\t& $0.5737075\\phantom{00}$ & $4.843929\\phantom{000}$ & $2\\times 10^{-8}$ &\n\t\t$2\\times 10^{-8}$\n\t\t\\\\\n\t\t& $-\\pi\/2$ & $-2.021127357$ & $-3.396083610$\n\t\t& $0.5736988\\phantom{00}$ & $4.843824\\phantom{000}$ & $2\\times 10^{-8}$ &\n\t\t$2\\times 10^{-8}$\n\t\t\\\\\n\t\t& avg & $-2.021125529$ & $-3.396004318$\n\t\t& $0.5737031\\phantom{00}$ & $4.843877\\phantom{000}$ & - & -\n\t\t\\\\ \\hline\n\t\t$e02.23$ & $0$ & $-0.324830139$ & $-0.877896699$\n\t\t& $0.134964247$ & $1.762343845$ & $6\\times 10^{-9}$ &\n\t\t${}^{\\phantom{1}}3\\times 10^{-11}$\n\t\t\\\\\n\t\t& $-\\pi\/2$ & $-0.325170299$ & $-0.877675562$\n\t\t& $0.134984611$ & $1.762586419$ &
$6\\times 10^{-9}$ &\n\t\t${}^{\\phantom{1}}3\\times 10^{-11}$\n\t\t\\\\\n\t\t& avg & $-0.325000220$ & $-0.877786129$\n\t\t& $0.134974429$ & $1.762465132$ & - & -\n\t\t\\\\ \\hline\n\t\t$e05.13$ & $0$ & $-0.482340196$ & $-6.445882073$\n\t\t& $1.45739\\phantom{0000}$ & $7.28846\\phantom{0000}$ & $9\\times 10^{-5}$ &\n\t\t$8\\times 10^{-5}$\n\t\t\\\\\n\t\t& $-\\pi\/2$ & $-0.480744834$ & $-6.448440589$\n\t\t& $1.45724\\phantom{0000}$ & $7.28681\\phantom{0000}$ & $9\\times 10^{-5}$ &\n\t\t$8\\times 10^{-5}$\n\t\t\\\\\n\t\t& avg & $-0.481543289$ & $-6.447161427$\n\t\t& $1.45731\\phantom{0000}$ & $7.28763\\phantom{0000}$ & - & -\n\t\t\\\\ \\hline\n\t\t$e05.12$ & $0$ & $-0.364726314$ & $-2.915401042$\n\t\t& $0.590229\\phantom{000}$ & $3.85672\\phantom{0000}$ & $3\\times 10^{-5}$ &\n\t\t$2\\times 10^{-5}$\n\t\t\\\\\n\t\t& $-\\pi\/2$ & $-0.361475840$ & $-2.919310921$\n\t\t& $0.589990\\phantom{000}$ & $3.85408\\phantom{0000}$ & $3\\times 10^{-5}$ &\n\t\t$2\\times 10^{-5}$\n\t\t\\\\\n\t\t& avg & $-0.363102237$ & $-2.917355986$\n\t\t& $0.590110\\phantom{000}$ & $3.85540\\phantom{0000}$ & - & -\n\t\t\\\\ \\hline\n\t\t$e05.23$ & $0$ & $\\phantom{-}0.092800200$ &\n\t\t$-0.713304108$\n\t\t& $0.12832694\\phantom{0}$ & $1.3990094\\phantom{00}$ & $3\\times 10^{-7}$ &\n\t\t$2\\times 10^{-7}$\n\t\t\\\\\n\t\t& $-\\pi\/2$ & $\\phantom{-}0.090053088$ & $-0.710893569$\n\t\t& $0.12855813\\phantom{0}$ & $1.4017751\\phantom{00}$ & $3\\times 10^{-7}$ &\n\t\t$2\\times 10^{-7}$\n\t\t\\\\\n\t\t& avg & $\\phantom{-}0.091426409$ & $-0.712098512$\n\t\t& $0.12844253\\phantom{0}$ & $1.4003923\\phantom{00}$ & - & -\n\t\t\\\\\n\t\t\\hline\n\t\t\\hline\n\t\\end{tabular*}\n\\end{table*}\n}\n\nIn a similar way, the average torque $\\mathcal{T}$ applied by the SSF balances the sum of the angular momentum flux\nat the horizon $\\langle \\dot{L}_z \\rangle^\\mathcal{H}$ and at infinity $\\langle \\dot{L}_z \\rangle^\\infty$,\n\\begin{align}\n\\label{eqn:fluxLzBalance}\n\t-\\mathcal{T} &= \\langle \\dot{L}_z
\\rangle^\\mathrm{tot} \\equiv\n\t\\langle \\dot{L}_z \\rangle^\\mathcal{H}\n\t+ \\langle \\dot{L}_z \\rangle^\\infty,\n\t\\\\\n\t\\langle \\dot{L}_z \\rangle^\\mathcal{H} & = \\frac{1}{4\\pi}\n\t\\sum_{\\hat{l} = 0}^\\infty \\sum_{m=-\\hat{l}}^{\\hat{l}} \\sum_{N=-\\infty}^\\infty\n\tm \\g_{mN} |\\bar{C}^-_{\\hat{l} mN}|^2,\n\t\\\\ \\label{eqn:fluxLzBalanceLast}\n\t\\langle \\dot{L}_z \\rangle^\\infty & = \\frac{1}{4\\pi}\n\t\\sum_{\\hat{l} = 0}^\\infty \\sum_{m=-\\hat{l}}^{\\hat{l}} \\sum_{N=-\\infty}^\\infty\n\tm\\o_{mN} |\\bar{C}^+_{\\hat{l} mN}|^2 .\n\\end{align}\nRecall from \\eqref{eqn:ClmN2Clmkn} that the net amplitude $\\bar{C}^\\pm_{\\hat{l} mN}= \\bar{C}^\\pm_{\\hat{l} mN}(\\bar{q}_0)$\nis a function of $\\bar{q}_0$, which captures the effect on the fluxes of the phase of the resonant orbit.\n\nIn a numerical calculation the fluxes tend to converge exponentially with increasing numbers of modes and only\nrequire calculation of the matching coefficients $\\bar{C}^\\pm_{\\hat{l} mN}$, not the full SSF. The effect is that the\nfluxes can usually be computed to high accuracy. In Table \\ref{tab:fluxes} we report numerical values for the\ntotal fluxes (at infinity and the horizon) for two different phase parameters, $q_{\\th 0} = \\b_\\th \\bar{q}_0 = 0$\nand $-\\pi\/2$, and for each of the models outlined in Table \\ref{tab:orbits}. Consistent with calculations of\ngravitational fluxes \\cite{Misn72}, most of the horizon fluxes are negative due to superradiant scattering (each\nmodel has primary spin of $a\/M = 0.9$). 
As expected, orbits with smaller pericentric distances $r_\\text{min}$ tend\nto produce larger fluxes, while eccentricity has a smaller effect.\n\nIn Table \\ref{tab:fluxes} we also list the computed average over $\\bar{q}_0$ of the resonant-orbit fluxes\n\\begin{equation} \\label{eqn:fluxAvg}\n\t\\langle\\langle \\dot{X}\\rangle\\rangle_{\\bar{q}_0}\n\t\\equiv \\frac{1}{2\\pi}\\int_0^{2\\pi} \\langle \\dot{X} \\rangle\\;\n\td\\bar{q}_0,\n\\end{equation}\nwhere $X = E, L_z$. This average over $\\bar{q}_0$ (i.e., a double averaging) gives the flux that would be seen in\na system with nearly the same orbital parameters but infinitesimally off resonance so that its motion ergodically\nfills the torus. It is equivalent to computing the fluxes with the normal incoherent sum of terms with\n$\\vert \\hat{C}^\\pm_{\\hat{l} mkn} \\vert^2$ over all $n$ and $k$. With \\eqref{eqn:fluxAvg} giving a background average,\nwe can define the residual variation (enhancement or diminishment) in the fluxes that arises on resonance,\n\\begin{equation} \\label{eqn:resVar}\n\t\\langle \\d \\dot{X} \\rangle \\equiv\n\t\\langle \\dot{X} \\rangle - \\langle\\langle\n\t\\dot{X}\\rangle\\rangle_{\\bar{q}_0}.\n\\end{equation}\nWe plot $\\langle \\d\\dot{E}\\rangle^\\mathrm{tot}$ and\n$\\langle \\d\\dot{L}_z\\rangle^\\mathrm{tot}$ for the $e02.12$ (solid black lines), $e02.23$ (dashed red lines), and\n$e05.23$ (dot-dashed lines) orbits in Fig.~\\ref{fig:fluxRes}.
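The equivalence between the $\bar{q}_0$ average and the incoherent sum can be checked with a toy model in which each amplitude enters the net sum with an integer-multiple phase of $\bar{q}_0$ (the amplitudes and phase multipliers below are illustrative assumptions, not the structure of \eqref{eqn:ClmN2Clmkn} itself):

```python
import numpy as np

# Toy check: averaging the coherent, on-resonance |net amplitude|^2 over the
# phase qbar_0 removes the cross terms and recovers the incoherent sum.
rng = np.random.default_rng(1)
C_hat = rng.normal(size=4) + 1j * rng.normal(size=4)  # toy amplitudes
s = np.array([0, 1, 2, 3])                            # assumed integer phase multipliers

M = 32                      # M > max|s_j - s_k| makes the grid average exact
q0_grid = 2 * np.pi * np.arange(M) / M
flux_vs_q0 = [abs(np.sum(C_hat * np.exp(1j * s * q0))) ** 2 for q0 in q0_grid]

double_avg = np.mean(flux_vs_q0)          # <<|net amplitude|^2>>_{qbar_0}
incoherent = np.sum(np.abs(C_hat) ** 2)   # sum_k |C_k|^2
assert np.isclose(double_avg, incoherent)
```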
By comparing these figures to the values in Table\n\\ref{tab:fluxes}, we see that the residual variations are relatively small compared to the magnitudes of the total\nfluxes, but they are still greater than the $\\sim 10^{-8}$ fractional error in our numerical calculations.\n\n\\begin{figure*}[t]\n\t\\centering\n\t\\includegraphics[width=0.9\\textwidth]{Figures\/workTorque_allOrbits_save.pdf}\n\t\\caption{Separate contributions from the dissipative and conservative components of the self-force to the\n\tresidual variations $\\langle \\d \\dot{\\mathcal{E}} \\rangle$ in energy (top row) and\n\t$\\langle \\d \\dot{\\mathcal{L}}_z \\rangle$ in angular momentum (bottom row) for the $e02.12$ (left),\n\t$e02.23$ (middle), and $e05.23$ (right) orbits plotted as functions of the resonant-orbit phase.\n\tContributions from the dissipative self-force are given by the solid (black) curves, while contributions from\n\tthe conservative self-force are depicted by the dashed (red) curves. The (grey) shaded region represents the\n\tformal uncertainty in our calculation of the conservative SSF, $\\langle \\d \\dot{\\mathcal{E}} \\rangle^\\mathrm{cons} \\pm \\s_\\mathrm{cons}$ and $\\langle \\d \\dot{\\mathcal{L}}_z \\rangle^\\mathrm{cons} \\pm \\s_\\mathrm{cons}$, due to fitting for\n\thigher-order regularization parameters. The calculation of $\\s_\\mathrm{cons}$ is discussed below\n\t(in Sec.~\\ref{sec:EnAndLz}). As expected, the residual variations in energy and angular momentum arising from\n\tthe conservative SSF (dashed\/red curves) are consistent with zero.\n\t}\n\t\\label{fig:workTorque}\n\\end{figure*}\n\nIn the case of the 2:3 resonances plotted as dashed (red) and dot-dashed (blue) curves in Fig.~\\ref{fig:fluxRes},\nthe energy and angular momentum fluxes are minimized when the motion possesses simultaneous turning points in $r$\nand $\\th$ (i.e., $\\b_{\\th}\\bar{q}_{0}=0,\\pi$). While not plotted here, the other 2:3 resonance ($e05.23$) shares\nthis behavior.
On the other hand, for the 1:2 resonance in Fig.~\\ref{fig:fluxRes}, the energy and angular momentum\nfluxes are maximized when the motion possesses simultaneous turning points, a feature which is shared by the other\n1:2- and 1:3-resonant orbits. Evidence of this behavior can also be found in Table \\ref{tab:fluxes}. Following\nthe work of \\cite{FlanHughRuan14}, we also report in Appendix \\ref{app:fracVarFlux} the total fractional variations\n$\\D \\dot{E}$ and $\\D \\dot{L}_z$ as defined by (4.5) in \\cite{FlanHughRuan14}.\n\nAdditionally, we can make use of the flux-balance laws to test the accuracy of our SSF data. The fractional errors\nbetween the fluxes and the work $\\mathcal{W}$ and torque $\\mathcal{T}$, computed via \\eqref{eqn:fluxEBalance}\nand \\eqref{eqn:fluxLzBalance}, are given in the last two columns of Table \\ref{tab:fluxes}. We find good agreement,\nwith fractional errors of $\\sim 10^{-11}$-$10^{-5}$. Recalling the numerical convergence of the SSF displayed in\nFig.~\\ref{fig:convergenceFph}, these fractional errors are in line with the predicted numerical accuracy of our\nSSF data.\n\nThe good agreement between our flux and SSF results is a further way of seeing that the conservative component of\nthe SSF does not contribute to $\\langle \\dot{\\mathcal{E}} \\rangle$ and $\\langle \\dot{\\mathcal{L}}_z \\rangle$. 
The\nfluxes are \\emph{purely dissipative} quantities, and for the flux-balance laws to hold, only the dissipative component\nof the SSF can contribute to the averages $\\langle \\dot{\\mathcal{E}} \\rangle$ and\n$\\langle \\dot{\\mathcal{L}}_z \\rangle$,\neven during resonances \\cite{IsoyETC19}.\n\nTo verify this, we calculate separately the contributions of the\ndissipative and conservative SSF to the residual variations $\\langle \\d \\dot{\\mathcal{E}} \\rangle$ and\n$\\langle \\d \\dot{\\mathcal{L}}_z \\rangle$ by replacing $\\bar{F}_\\a^\\mathrm{res}$ with $\\bar{F}_\\a^\\mathrm{diss}$\nand $\\bar{F}_\\a^\\mathrm{cons}$ in \\eqref{eqn:work} and \\eqref{eqn:torque}.\nIn Fig.~\\ref{fig:workTorque} we plot the residual variations for the $e02.12$ (left), $e02.23$ (middle), and\n$e05.23$ (right) orbits. The solid (black) curves correspond to the dissipative contributions, which share the same\nvarying behavior as seen in Fig.~\\ref{fig:fluxRes} [note the opposite sign in \\eqref{eqn:fluxEBalance} and\n\\eqref{eqn:fluxLzBalance}]. The dashed (red) curves correspond to the conservative contributions, and the filled\n(grey) regions plot the estimated uncertainty of the conservative contributions due to truncation of the\nregularization procedure, which affects the conservative SSF.\n\nThe uncertainty in the conservative contributions originates in our mode-sum regularization of the SSF. As summarized\nin Sec.~\\ref{sec:ssfReg} [and discussed with additional detail in \\cite{NasiOsbuEvan19}, Sec.~IV A, the paragraph\nfollowing Eq.~(4.11)], we regularize our SSF data via mode-sum regularization, but must numerically fit for\nhigher-order regularization parameters in order to improve the numerical convergence of our $l$-mode sum. Without\nextrapolating these higher-order terms, our regularized results would be dominated by truncation errors of\n$O(l_\\mathrm{max}^{-1})$. 
Our extrapolation procedure typically improves the truncation error scaling to\n$O(l_\\mathrm{max}^{-7})$, but it also introduces systematic uncertainties, since our extrapolated results depend on\nhow many terms we include in our numerical fits and which multipole SSF modes we use to produce those fits. These\nsystematic uncertainties tend to be much larger than what the improved truncation error scaling would naively suggest, and\nthus become the dominant form of error in our numerical conservative SSF results.\\footnote{Recall that only the\nconservative component of the SSF needs to be regularized.} Therefore, any quantity that depends on the conservative\nself-force will also have an estimated uncertainty associated with the fitting. We estimate the uncertainty of the\nnewly calculated quantity by propagating uncertainties in additive terms,\ni.e.,\n\\begin{equation}\n\t\\sigma_f^2 = \\left\\vert \\frac{\\partial f}{\\partial x_{0}} \\right\\vert^2\n\t\\sigma_{x_0}^2 + \\left\\vert \\frac{\\partial f}{\\partial x_1} \\right\\vert^2\n\t\\sigma_{x_1}^2 + \\cdots\n\t+ \\left\\vert \\frac{\\partial f}{\\partial x_n} \\right\\vert^2\n\t\\sigma_{x_n}^2 ,\n\\end{equation}\nwhere $\\sigma_f$ is the propagated uncertainty of a quantity $f$ due to its dependence on the (assumed independent)\nparameters $(x_0, x_1, \\dots, x_n)$ with (assumed uncorrelated) errors\n$(\\sigma_{x_0}, \\sigma_{x_1}, \\dots, \\sigma_{x_n})$.
For example,\n$\\langle \\d \\dot{\\mathcal{E}} \\rangle^\\mathrm{cons}$ and its uncertainty are explicitly computed via the sums\n\\begin{align}\n\t\\langle \\d \\dot{\\mathcal{E}} \\rangle^\\mathrm{cons}(\\bar{q}_0) &= \\frac{q^2}{N}\n\t\\sum_{n = 0}^{N-1} \\bar{\\Sigma}_p\\left(\\frac{2\\pi n}{N};\\bar{q}_0\\right)\n\t\\bar{F}^\\mathrm{cons}_t\\left(\\frac{2\\pi n}{N}; \\bar{q}_0\\right), \\notag\n\t\\\\\n\t\\sigma_\\mathrm{cons}^2(\\bar{q}_0) &= \\frac{q^4}{N^2} \\sum_{n=0}^{N-1}\n\t\\bar{\\Sigma}_p^2\\left(\\frac{2\\pi n}{N};\\bar{q}_0\\right)\n\t\\sigma_{t}^2\\left(\\frac{2\\pi n}{N};\\bar{q}_0\\right), \\label{eqn:propErrorEn}\n\\end{align}\nwhere $\\sigma_{t}(\\bar{q};\\bar{q}_0)$ is the estimated uncertainty of $\\bar{F}_t^\\mathrm{cons}(\\bar{q};\\bar{q}_0)$\nfrom our fitting procedure. The first line of \\eqref{eqn:propErrorEn} is obtained by replacing the integrand of\n\\eqref{eqn:work} with its discrete Fourier transform.\\footnote{Recall that $\\langle\\langle \\dot{\\mathcal{E}} \\rangle\\rangle_{\\bar{q}_0}$\nvanishes exactly due to the symmetries of Kerr geodesics, and, thus,\n$\\langle\\d \\dot{\\mathcal{E}}\\rangle^\\mathrm{cons} = \\langle \\dot{\\mathcal{E}}\\rangle^\\mathrm{cons}$.}\n\nNo uncertainty estimates for the dissipative\ncontributions are included, as these are orders of magnitude smaller. While the conservative\npart leaves behind a nonzero numerical result, these variations fall well below our estimated uncertainty and are\nthus consistent with zero, as expected.
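A direct transcription of discrete sums of this form, with toy samples standing in for our actual $\bar{\Sigma}_p$, $\bar{F}^\mathrm{cons}_t$, and $\sigma_t$ data (all values below are illustrative assumptions), shows the propagation in practice:

```python
import numpy as np

# Discrete average of a conservative-SSF contribution and its propagated
# uncertainty, using toy samples in place of the paper's data.
q2, N = 1.0, 128
qbar = 2 * np.pi * np.arange(N) / N

Sigma_p = 2.0 + 0.3 * np.cos(qbar)        # toy Sigma_p(2 pi n / N; qbar_0)
F_cons_t = 0.1 * np.sin(qbar)             # toy conservative F_t samples
sigma_t = np.full(N, 1e-4)                # assumed per-sample fit uncertainty

dE_cons = (q2 / N) * np.sum(Sigma_p * F_cons_t)
sigma_cons = np.sqrt((q2 / N) ** 2 * np.sum(Sigma_p ** 2 * sigma_t ** 2))

# For these toy profiles the average itself cancels by symmetry, while the
# propagated error reduces to q^2 * sigma * rms(Sigma_p) / sqrt(N).
assert abs(dE_cons) < 1e-12
assert np.isclose(sigma_cons, q2 * 1e-4 * np.sqrt(np.mean(Sigma_p ** 2) / N))
```

Note that, with equal per-sample uncertainties, adding more $\bar{q}$ samples reduces the propagated error like $1/\sqrt{N}$, as the quadrature sum makes explicit.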
The estimated uncertainty is much larger for the $e02.12$ model due to the\nslower convergence of the SSF for orbits that lie deeper in the strong field (see Fig.~\\ref{fig:convergenceFph}).\nIn that model, our formal uncertainty far exceeds not only the conservative contributions but even the dissipative\nvariations, which may point to the formal uncertainties being too conservative.\n\n\\subsection{Carter constant and the integrability conjecture}\n\n\\begin{figure*}[t]\n\t\\centering\n\t\\includegraphics[width=\\textwidth]{Figures\/Qdot_allOrbits_errors.pdf}\n\t\\caption{Contributions from the dissipative and conservative components of the self-force to\n\t$\\langle \\d \\dot{\\mathcal{Q}}\\rangle$ for the $e02.12$ (left), $e02.23$ (middle), and $e05.23$ (right)\n\torbits. Contributions from the dissipative self-force are given by the solid (black) curves, while\n\tcontributions from the conservative self-force are depicted by the dashed (red) curves. The (grey) shaded\n\tregion represents the formal uncertainty in our calculation of the conservative SSF, $\\langle \\d \\dot{\\mathcal{Q}} \\rangle^\\mathrm{cons} \\pm \\s_\\mathrm{cons}$, due\n\tto fitting for higher-order regularization parameters. The calculation of $\\s_\\mathrm{cons}$ is discussed in detail\n\tin Sec.~\\ref{sec:EnAndLz}.\n\t}\n\t\\label{fig:dQdot}\n\\end{figure*}\n\nUnlike $\\langle \\dot{\\mathcal{E}}\\rangle$ and $\\langle \\dot{\\mathcal{L}}_z\\rangle$, $\\langle \\dot{\\mathcal{Q}}\\rangle$\nis not associated with a radiation flux. 
Instead, we must directly calculate the orbit-averaged rate of change of\nthe Carter constant from the self-force,\n\\begin{align} \\label{eqn:QdotAvgMino}\n\t\\langle \\dot{\\mathcal{Q}} \\rangle\n\t= \\frac{1}{\\G} \\left\\langle \\Sig \\frac{d\\mathcal{Q}}{d\\tau}\n\t \\right\\rangle_\\la .\n\\end{align}\nHere $\\langle \\mathcal{X}\\rangle_\\la$ refers to an average over $\\mathcal{X}(\\la)$ with respect to Mino time $\\la$,\nand the proper time derivative of $\\mathcal{Q}$ is given by\n\\begin{align}\n\t\\frac{\\mu}{2}\\frac{d\\mathcal{Q}}{d\\tau} &= \\label{eqn:QTauDotTh}\n\tq^2 (\\csc^2\\th \\mathcal{L}_z-a \\mathcal{E})(F_\\vp+a\\sin^2\\th F_t)\n\t\\\\ \\notag\n\t& \\qquad + q^2 u_\\th F_\\th - q^2(\\mathcal{L}_z-a\\mathcal{E})\n\t\\left(F_\\vp+aF_t\\right)\n\t\\\\ \\notag\n\t&\\qquad \\qquad \\qquad \\qquad\n\t- q^2(\\mathcal{Q} - a^2\\cos^2\\th) u^\\a F_\\a,\n\t\\\\\n\t&= \\label{eqn:QTauDotR}\n\tq^2 \\D^{-1}(a \\mathcal{L}_z-\\varpi^2 \\mathcal{E})\n\t\\left(a F_\\vp+\\varpi^2 F_t\\right)\n\t\\\\ \\notag\n\t&\\qquad - q^2 \\D u_r F_r - q^2(\\mathcal{L}_z-a\\mathcal{E})\n\t\\left(F_\\vp+aF_t \\right)\n\t\\\\ \\notag\n\t&\\qquad \\qquad \\qquad \\qquad \\qquad \\qquad\n\t- q^2(\\mathcal{Q}+ r^2 ) u^\\a F_\\a.\n\\end{align}\n(See Appendix \\ref{app:Qdot}.)\n\nFor nonresonant orbits, the conservative contributions to $\\langle \\dot{\\mathcal{Q}} \\rangle$ vanish due to\nsymmetries of the motion. 
This can be seen by reexpressing \\eqref{eqn:QdotAvgMino} as a two-dimensional integral\nover $q_r$ and $q_\\theta$ \\cite{DrasHugh04,GrosLeviPere13},\n\\begin{equation} \\label{eqn:QdotAverageNonRes}\n\t\\langle \\dot{\\mathcal{Q}} \\rangle = \\frac{1}{\\G} \\int_0^{2\\pi} \\frac{dq_r}{2\\pi}\n\t\\int_0^{2\\pi} \\frac{dq_\\th}{2\\pi} \\left(\\Sig \\frac{d\\mathcal{Q}}{d\\tau}\n\t\\right).\n\\end{equation}\nRecall from \\eqref{eqn:consSSF} and \\eqref{eqn:ssfRet2Adv} that the two components $F^\\mathrm{cons}_{t,\\vp}$ are\nantisymmetric on the $q_r$-$q_{\\th}$ torus, while the other two components $F^\\mathrm{cons}_{r,\\theta}$ are\nsymmetric on the torus, so that\n\\begin{align}\n\tF^\\mathrm{cons}_{t,\\vp}(2\\pi-q_r, 2\\pi-q_\\theta)\n&= - F^\\mathrm{cons}_{t,\\vp}(q_r, q_\\theta),\n\\\\\n\tF^\\mathrm{cons}_{r,\\theta}(2\\pi-q_r, 2\\pi-q_\\theta)\n&= +F^\\mathrm{cons}_{r,\\theta}(q_r, q_\\theta).\n\\end{align}\nHence, if we only make use of the conservative components of the self-force in \\eqref{eqn:QTauDotTh} or\n\\eqref{eqn:QTauDotR}, then $dQ^\\mathrm{cons}\/d\\tau$ is also antisymmetric. Because $\\Sig$ is symmetric with respect\nto the angle variables, the conservative contributions must vanish when integrated over the entire torus in\n\\eqref{eqn:QdotAverageNonRes}. 
By disregarding these conservative perturbations and relating\n$\\langle \\dot{\\mathcal{Q}}\\rangle$ to the purely radiative (dissipative) piece of the perturbing field\n\\cite{Mino03,SagoETC06}, \\eqref{eqn:QdotAvgMino} reduces to a weighted mode sum over the field's asymptotic\namplitudes \\cite{DrasFlanHugh05,SagoETC06,FlanHughRuan14}, akin to\n\\eqref{eqn:fluxEBalance}-\\eqref{eqn:fluxLzBalanceLast}.\n\nFor $r\\theta$ resonances, the Mino time average reduces to the single integral over $\\bar{q}$ in \\eqref{eqn:qdotAvg},\n\\begin{equation}\n\\label{eqn:QdotAverageRes}\n\t\\langle \\dot{\\mathcal{Q}} \\rangle = \\frac{1}{\\G} \\int_0^{2\\pi}\n\t\\frac{d\\bar{q}}{2\\pi} \\left(\\bar{\\Sig} \\frac{d\\mathcal{Q}}{d\\tau}\n\t\\right) .\n\\end{equation}\nWhile the integrand in the above expression is still antisymmetric with respect to both $q_r$ and $q_\\theta$, it is\nnot, for a general choice of the initial resonant phase $\\bar{q}_0$, antisymmetric with respect to just $\\bar{q}$\n(though there may be special values of $\\bar{q}_0$ where it is). When integrated over a single closed track on the\ntorus, \\eqref{eqn:QdotAverageRes} is not guaranteed to vanish, in contrast with the nonresonant case.\n\nHowever, Flanagan and Hinderer \\cite{FlanHind12} conjecture that dynamics driven by the conservative piece of the\nself-force are always integrable in Kerr spacetime,\\footnote{Conservative perturbations are integrable in\nSchwarzschild spacetime, but their integrability has not been fully demonstrated for Kerr\n\\cite{VineFlan15,FujiETC17}.} and cannot therefore drive secular evolution of the perturbed system through a\nresonance. 
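The contrast between the vanishing torus average in the nonresonant case and the phase-dependent track average in the resonant case can be illustrated with a toy numerical sketch. The function below is a made-up stand-in (not our SSF data) that merely shares the antisymmetry of $d\mathcal{Q}^\mathrm{cons}/d\tau$ under $(q_r, q_\theta) \to (2\pi - q_r, 2\pi - q_\theta)$; the $1{:}2$ track parametrization and the initial phase $q_0$ are likewise illustrative choices.

```python
import numpy as np

# Toy model: f is antisymmetric under the torus reflection
# (q_r, q_th) -> (2*pi - q_r, 2*pi - q_th), mimicking dQ^cons/dtau.
def f(q_r, q_th):
    return np.sin(2.0 * q_r - q_th)

N = 400
q = np.linspace(0.0, 2.0 * np.pi, N, endpoint=False)
q_r, q_th = np.meshgrid(q, q)

# Antisymmetry check, then the full two-torus average (nonresonant case),
# which vanishes by the symmetry argument in the text.
assert np.allclose(f(2.0 * np.pi - q_r, 2.0 * np.pi - q_th), -f(q_r, q_th))
torus_avg = f(q_r, q_th).mean()

# Average along a single closed 1:2 resonant track, q_r = qbar + q0,
# q_th = 2*qbar: here f = sin(2*q0), which need not vanish and depends
# on the (hypothetical) initial resonant phase q0.
q0 = 0.7
track_avg = f(q + q0, 2.0 * q).mean()

print(torus_avg, track_avg)
```

The full-torus average is zero to machine precision, while the track average equals $\sin(2 q_0)$ for this toy function: an antisymmetric integrand is killed by the two-dimensional average but generically survives the one-dimensional resonant average.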
If this integrability conjecture for conservative perturbations holds true, then $r\\th$-resonant dynamics\nwill be driven purely by the dissipative self-force (at adiabatic order), and $\\langle \\dot{\\mathcal{E}}\\rangle$,\n$\\langle \\dot{\\mathcal{L}}_z\\rangle$, and $\\langle \\dot{\\mathcal{Q}}\\rangle$ can be computed via efficient mode-sum\nexpressions [e.g., \\eqref{eqn:fluxEBalance}-\\eqref{eqn:fluxLzBalanceLast}] during resonant (and nonresonant) motion,\nas demonstrated in \\cite{FlanHughRuan14}.\\footnote{Note that the authors of \\cite{FlanHughRuan14} made no claim about\nthe validity of the integrability conjecture, but instead adopted it as a matter of practicality, since calculations of\nthe conservative GSF were unavailable at that time (Hughes, private communications).}\n\nOn the other hand, Isoyama \\textit{et al.}~\\cite{IsoyETC13,IsoyETC19} also derived mode-sum expressions for\n$\\langle \\dot{\\mathcal{E}}\\rangle$, $\\langle \\dot{\\mathcal{L}}_z\\rangle$, and $\\langle \\dot{\\mathcal{Q}}\\rangle$ during\nresonances using a Hamiltonian formulation, but they found that their expression for $\\langle \\dot{\\mathcal{Q}} \\rangle$\ndepends on the conservative (or what they call the symmetric) component of the perturbed Hamiltonian (for the scalar\ncase, see Eqs.~(49)-(51) in \\cite{IsoyETC13} and for the gravitational case, see Eqs.~(63) and (75) in\n\\cite{IsoyETC19}). Unless this term vanishes upon averaging due to further symmetries, the integrability conjecture\nmust break down during resonances, and conservative perturbations will also drive the adiabatic evolution of $Q$.\n\nThis issue can in principle be tested using numerical modeling. 
To date, numerical calculations of\n$\langle \dot{\mathcal{Q}} \rangle$ for $r\th$-resonant orbits \cite{FlanHughRuan14,RuanHugh14,BerrETC16} have not\nincorporated the full first-order self-force.\footnote{Note that the results of Isoyama \textit{et al.}~\cite{IsoyETC13,IsoyETC19}\nwere purely analytical. No one has made use of the formalisms outlined in \cite{IsoyETC13,IsoyETC19} to numerically\nevaluate $\langle\dot{\mathcal{Q}}\rangle$, and our methods differ enough from the Hamiltonian formulation and Green's\nfunction methods of \cite{IsoyETC13,IsoyETC19} that we cannot easily compare our numerical results to these works.}\nThus there is no numerical evidence to support or refute the integrability conjecture. The scalar self-force model can\npotentially provide some insight. We test\nthe conjecture by measuring the relative contributions of the conservative and dissipative components of the SSF to\n$\langle \dot{\mathcal{Q}} \rangle$. As we did in Sec.~\ref{sec:EnAndLz}, we replace $\bar{F}^\mathrm{res}_\a$ in\n\eqref{eqn:qdotAvg} with a breakdown in terms of $\bar{F}^\mathrm{cons}_\a$ and $\bar{F}^\mathrm{diss}_\a$ to\ncalculate separately the conservative and dissipative contributions to $\langle \dot{\mathcal{Q}}\rangle$. 
To compare\nthese quantities, we then calculate their residual variations $\langle \d \dot{\mathcal{Q}}\rangle^\mathrm{cons}$\nand $\langle \d \dot{\mathcal{Q}}\rangle^\mathrm{diss}$ as functions of the resonant-orbit phase using \eqref{eqn:resVar}.\n\nIn Fig.~\ref{fig:dQdot}, we compare the variations $\langle \d \dot{\mathcal{Q}}\rangle^\mathrm{diss}$ (solid\nblack curves) and $\langle \d \dot{\mathcal{Q}}\rangle^\mathrm{cons}$ (dashed red curves) for the $e02.12$ (left),\n$e02.23$ (middle), and $e05.23$ (right) orbits, along with our formal uncertainty estimate $\s_\mathrm{cons}$ due to\nfitting for high-order regularization parameters in the conservative component of the self-force. Across all three\norbits, the numerical calculation gives a smoothly varying conservative contribution to\n$\langle \d \dot{\mathcal{Q}}\rangle$. The amplitudes of the conservative contributions are slightly smaller but on\nthe same order as the dissipative variations. \emph{The smooth variations of the conservative contributions and their\ncomparable magnitudes to the dissipative variations strongly suggest that $\langle \dot{\mathcal{Q}}\rangle$ does depend\non conservative scalar perturbations during a resonance.} However, in each case the conservative contributions from our\nSSF data fall below or nearly within the formal uncertainty estimates. Furthermore, for all three orbits, the estimated\nuncertainties in our calculations of $\langle \d \dot{\mathcal{Q}}\rangle^\mathrm{cons}$ are significantly larger than\nthe estimated uncertainties in $\langle \d \dot{\mathcal{E}}\rangle^\mathrm{cons}$ and\n$\langle \d \dot{\mathcal{L}}_z\rangle^\mathrm{cons}$, as seen from comparing Figs.~\ref{fig:workTorque} and\n\ref{fig:dQdot}. 
In fact, for the $e02.12$ and $e05.23$ orbits, our estimated uncertainty is typically large enough\nthat even if the conservative contribution dominated the variations in the dissipative contributions, they could\nstill be consistent with zero.\n\nIn the case of the $e02.12$ model, the truncation in the regularization of the conservative part of the SSF suggests\na formal uncertainty that is an order of magnitude greater than the numerically determined values of both the\nconservative and dissipative contributions to $\\langle \\d \\dot{\\mathcal{Q}}\\rangle$, and the grey region engulfs\nthe entire left panel of Fig.~\\ref{fig:dQdot}. The size of the uncertainty in this model is similar to what was seen\nin $\\langle \\d \\dot{\\mathcal{E}}\\rangle$ and $\\langle \\d \\dot{\\mathcal{L}}_z \\rangle$ in Fig.~\\ref{fig:workTorque}.\nIn the $e05.23$ orbit, the calculated conservative contribution not only falls consistently below the formal\nuncertainty, but contains high-frequency oscillations that are indicative of numerical noise. This noise appears\nalso in the formal uncertainty estimate itself. In the $e02.23$ model,\n$\\langle \\d \\dot{\\mathcal{Q}}\\rangle^\\mathrm{cons}$ still primarily falls within the formal uncertainty, but is\nclosest, in this case, to being a significant nonzero result. The numerical calculations, in this model at least,\nare close to providing evidence of the integrability conjecture's failure, but a reduction in the\nregularization errors by an order of magnitude or two would be required to be sure.\n\nWhen compared to calculating $\\langle \\d \\dot{\\mathcal{E}}\\rangle^\\mathrm{cons}$ and\n$\\langle \\d \\dot{\\mathcal{L}}_z\\rangle^\\mathrm{cons}$, the estimated uncertainty proves to be much higher when\nanalyzing the evolution of the Carter constant. 
In part, this is because\n$\langle \d \dot{\mathcal{Q}}\rangle^\mathrm{cons}$ depends on $F_\th$, which tends to be the most difficult\ncomponent of the SSF to accurately extrapolate with our numerical regularization procedure. The $\th$ component\ncouples to even higher spheroidal modes than the other SSF components as a result of the window function described\nin Sec.~\ref{sec:ssf}. If we truncate all mode calculations at a particular $l_\mathrm{max}$, then we can only\ncalculate the multipoles of $F_\th$ up to $l_\mathrm{max}-3$, as seen from \eqref{eqn:ssfRetMino} and\n\eqref{eqn:delTheta}. Since the higher $l$ modes are beneficial for extracting the higher-order regularization\nparameters, missing this higher-mode information for $F_\th$ hampers our ability to fit for its (regularized)\nconservative component. We are still able to calculate $F_\th$ to 3 or 4 significant digits, which in the\nabsence of resonances is accurate enough to provide its contribution to EMRI evolutions that are phase accurate to\nless than a radian. As this analysis indicates, additional accuracy will be needed to quantify with certainty\npossible contributions from the conservative sector to the secular evolution of the Carter constant. In principle,\nwe could calculate additional modes to improve the accuracy of $F_\th$. In practice, this is difficult because\nhigher $l$ modes are harder to calculate accurately due to the rapidly oscillatory behavior of the integrands in the\nsource integrations. This behavior is mirrored in the gravitational case \cite{Vand16}.\n\nThe plotted uncertainty regions in Fig.~\ref{fig:dQdot} provide an estimate of how well we have fit for the\nhigher-order regularization parameters and regularized our self-force data. Any nonvanishing, and potentially\nsmooth, variations within these uncertainty bounds could be a result of residual contributions of the singular field\nthat were not removed during regularization. 
This issue is unique to averages over $r\th$-resonant orbits. In the\nnonresonant case, the singular field---much like the conservative contribution---vanishes when averaged over the\nentire two-torus. Even if we do not regularize the conservative component of the SSF,\n$\langle \d \dot{\mathcal{Q}}\rangle^\mathrm{cons}$ will still exactly vanish for nonresonant orbits. On the other\nhand, averages over the singular contributions, or any antisymmetric function on the two-torus, are not guaranteed\nto vanish for an arbitrary $r\th$ resonance.\n\nWe conclude that our conservative SSF results are not yet accurate enough to show a definitive conflict with the\nintegrability conjecture. That being said, our uncertainty regions may overestimate residual contributions of the\nsingular field that were not properly regularized. If this is the case, then the nondissipative variations that we\nsee in Fig.~\ref{fig:dQdot} may very well be physical. Further tests of the integrability conjecture are necessary\nbut will require improved regularization of the self-force. This might come either from more extensive numerical\ncalculations and regularization parameter fitting or from the added input of analytically determined higher-order\nregularization parameters.\n\n\section{Summary}\n\label{sec:conclusion}\n\nWe considered point scalar charges following $r\th$-resonant geodesics in Kerr spacetime and calculated, for the\nfirst time, the resulting strong-field SSF experienced by these charges. This work serves as a\nfirst step in understanding the still unquantified behavior of the gravitational self-force during transient orbital\n$r\th$ resonances. To calculate the SSF we used a \textsc{Mathematica} code previously developed in\n\cite{NasiOsbuEvan19}. 
In constructing our SSF data, we derived a simple shifting relation, \\eqref{eqn:ssfqq0}, that\nallows us to calculate the self-force during $r\\th$ resonances using self-force data that assumes the geodesic\nsources are nonresonant. This mapping provides an efficient method for analyzing the self-force as a function of\nthe initial phase at which a system enters resonance.\n\nWhen calculating the SSF, we focused on six different $r\\th$-resonant orbits: each scalar charge followed a 1:3, 1:2,\nor 2:3 $r\\th$-resonant geodesic and each orbit either had an eccentricity of $0.2$ or $0.5$. The full set of source\nparameters can be found in Table \\ref{tab:orbits}. In Figs.~\\ref{fig:ssfRVar}-\\ref{fig:ssfPhVar}, we demonstrated\nhow varying the initial phase of $r\\th$-resonant orbits impacts the evolution of the self-force. We then projected\nour SSF data onto invariant two-tori in Figs.~\\ref{fig:ssfTTorus}-\\ref{fig:ssfPhTorus} to display the dependence of\nthe self-force on the radial and polar motion of the source, regardless of initial conditions and phases. We\nvalidated our SSF data by analyzing the convergence properties of our regularized self-force multipoles, as shown\nin Fig.~\\ref{fig:convergenceFph}, and by comparing the radiation fluxes to the rate of work and torque done on each\nscalar-charge source by the SSF via flux-balance laws, which are reported in Table \\ref{tab:fluxes}.\n\nWith these novel self-force calculations we also analyzed the impact of the conservative scalar self-force on the\norbit-averaged evolution of the orbital energy $\\langle \\dot{\\mathcal{E}} \\rangle$, $z$ component of the angular\nmomentum $\\langle \\dot{\\mathcal{L}}_z \\rangle$, and Carter constant $\\langle \\dot{\\mathcal{Q}} \\rangle$. 
As expected\nfrom flux-balance arguments, the contributions to $\\langle \\dot{\\mathcal{E}} \\rangle$ and\n$\\langle \\dot{\\mathcal{L}}_z \\rangle$ from the conservative SSF are negligible and consistent with zero, as shown\nin Fig.~\\ref{fig:workTorque}. On the other hand, our conservative SSF data substantially contribute to\n$\\langle \\dot{\\mathcal{Q}} \\rangle$, as displayed in Fig.~\\ref{fig:dQdot}, though these contributions are on the\norder of, or much less than, our estimated uncertainty. This uncertainty is a result of the numerical regularization\nof the conservative SSF. Because these contributions fall within these uncertainty bounds, we cannot distinguish\nwhether these nondissipative contributions are due to the residual singular field or the regularized conservative\nself-force. If these contributions are in fact from the conservative SSF, then this would indicate that the\nintegrability conjecture proposed by Flanagan and Hinderer \\cite{FlanHind12} breaks down during $r\\th$ resonances,\nand conservative contributions will need to be considered as predicted by Isoyama \\textit{et al.} \\cite{IsoyETC19}. The\npresence of these conservative contributions would then introduce a new numerical challenge for adiabatic\ncalculations: to account for the evolution through resonances, adiabatic codes would need to incorporate\nregularization procedures to accurately quantify $\\langle \\dot{\\mathcal{Q}} \\rangle$. 
Furthermore, these regularized\ncontributions would need to be known to high levels of precision, at least to $O(\\eps^{-1\/2})$, which we have not\nbeen able to achieve with our current numerical implementations.\n\nIn future work, we will determine whether or not these nondissipative contributions to\n$\\langle \\dot{\\mathcal{Q}} \\rangle$ in the scalar case are physical, or merely systematic errors, by implementing\nalternative, improved regularization schemes and by deriving analytic expressions for the higher-order regularization\nparameters. If the conservative SSF contributions that we have observed are indeed physical, then we hypothesize that\nduring $r\\theta$ resonances the conservative GSF will also contribute to $\\langle \\dot{\\mathcal{Q}} \\rangle$, as first suggested in \\cite{IsoyETC19}. Therefore, we are also constructing a code to calculate the GSF experienced by EMRI systems as they encounter $r\\th$ resonances to test this hypothesis.\n\n\\acknowledgements\n\nThis work was supported by NSF Grant Nos.~PHY-1806447 and PHY-2110335 to the University of North Carolina--Chapel Hill,\nand by the North Carolina Space Grant Graduate Research Fellowship. 
Z.N.~acknowledges additional support from NSF Grant\nNo.~DMS-1439786 while in residence as a postdoctoral fellow for the Institute for Computational and Experimental\nResearch in Mathematics in Providence, Rhode Island, during the Advances in Computational Relativity semester program.\nZ.N.~also acknowledges support by appointment to the NASA Postdoctoral Program at the Goddard Space Flight Center,\nadministered by Universities Space Research Association under contract with NASA.\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\nThe Traveling Salesman Problem (TSP) is the problem of finding a least-cost Hamiltonian cycle in an edge-weighted graph.\nIt is one of the most widely studied hard combinatorial optimization problems.\nFor details on the TSP, we refer to the well-known books~\\cite{ApplegateBCC11, GutinP06, LawlerLRS85, Reinelt94}.\nFor clarity of discussion, we will refer to this problem as the \\emph{linear TSP}.\n\nLet $G = (V,E)$ be a graph (directed or undirected) on the vertex set $ V = \\{1, \\ldots ,n\\}$.\nFor each pair $(e,f)$ of edges in $E$, a cost $q_{ef}$ is prescribed.\nLet $\\mathcal{T}_n$ be the set of all Hamiltonian cycles (tours) in $G$.\nThe quadratic cost $Q[\\tau]$ of a tour $\\tau \\in \\mathcal{T}_n$ is given by $ Q[\\tau] := \\sum \\limits_{(e,f) \\in \\tau \\times \\tau} q_{ef}$.\nThen the \\emph{quadratic traveling salesman problem} (QTSP) is to find a tour $\\tau \\in \\mathcal{T}_n$ such that $Q[\\tau]$ is minimum.\n\nThe problem QTSP received only limited attention in the literature.\nA special case of the \\emph{adjacent quadratic TSP} in which $q_{ef}$ is assumed to be zero if edges $e$ and $f$ are not adjacent, is frequently also called quadratic TSP.\nThis problem has been studied by various authors recently~\\cite{Fischer14, Fischer16, FischerFJKMG14, FischerH13, JagerM08, RostamiMBG16, AichholzerFFMPPS17}.\nWhile QTSP is known to be strongly $\\cplxNP$-hard for Halin graphs, the 
adjacent quadratic TSP can be solved in $\\orderO{n}$ time on this class of graphs~\\cite{WoodsP17Halin}.\nQTSP defined over the set of pyramidal tours is also known to be strongly $\\cplxNP$-hard~\\cite{WoodsP17PQ}.\nOther special cases of QTSP are studied in~\\cite{WoodsP17} along with various complexity results.\nThe $k$-neighbour TSP~\\cite{WoodsPS17}, Angular metric TSP~\\cite{AggarwalKMS97}, Dubins vehicle TSP~\\cite{SavlaFB08} are also related to QTSP.\n\nFor the rest of this paper, unless otherwise stated, we assume that $G$ is a complete directed graph.\nLet $Q = (q_{ef})$ be the matrix of size $n(n-1)\\times n(n-1)$ consisting of the cost of the edge pairs.\nNote that $(e,e)$ is a permitted pair.\nAn instance of QTSP is completely specified by $Q$.\nThe matrix $Q$ is said to be \\emph{tour-linearizable} (or simply linearizable) if there exists a matrix $C = (c_{ij})$ of size $n \\times n$ such that $Q[\\tau] = C(\\tau)$ for all $\\tau \\in \\mathcal{T}_n$, where $C(\\tau) := \\sum_{(i,j)\\in \\tau} c_{ij}$.\nSuch a cost matrix $C$ is called a \\emph{linearization} of $Q$ for the QTSP.\n\nThe \\emph{QTSP linearization problem} can be stated as follows:\n``Given an instance of QTSP with cost matrix $Q$, verify if it is linearizable and if yes, compute a linearization $C$ of $Q$.''\n\nIt may be noted that since (the decision versions of) the QTSP and the TSP are both $\\cplxNP$-complete problems, there exists a polynomial-time reduction from one problem to the other.\nWe are not attempting to find a reduction from QTSP to TSP for some special cases, although such a result is a byproduct.\nThe question of linearization is more involved and requires the development of structural properties of linearizable matrices.\nUnlike QTSP or TSP, there are no immediate certificates for linearizability or non-linearizability of matrices.\nHence, even containment of the linearization problem in $\\cplxNP$ or $\\cplxCoNP$ is challenging.\n\nLinearization problems have been 
studied in the literature for various quadratic combinatorial optimization problems~\cite{AdamsW14, cCelaDW16, CusticSPB17, CusticP18, KabadiP11, LendlP17, PunnenK13, PunnenP18, HuS18, HuS18a} and polynomial-time algorithms are available in all these cases.\nUnfortunately, the simple characterizations established for quadratic spanning trees~\cite{CusticP18} and for various bilinear type problems~\cite{cCelaDW16, CusticSPB17, LendlP17} do not seem to extend to the case of QTSP.\nThe characterization of linearizable instances of the quadratic assignment problem studied in~\cite{KabadiP11} has some similarities to the case of QTSP that we study here.\nHowever, to solve the linearization problem for QTSP one needs to overcome more challenges, which makes it an interesting and relevant problem to investigate.\nA family of linearizable cost matrices can be used to obtain strong continuous relaxations of quadratic 0-1 integer programming problems, as demonstrated in~\cite{HuS18a, PunnenP18}.\nStrong reformulations of quadratic 0-1 integer programming problems are reported in~\cite{BillionnetEP09, PornNSW17} exploiting linear equality constraints.\nMoreover, strong reformulations of quadratic 0-1 integer programming using quadratic equality constraints are reported in~\cite{GalliL14}.\nAs observed in~\cite{PunnenP18}, linearizable cost matrices immediately give redundant quadratic equality constraints and, exploiting the results of~\cite{GalliL14}, strong reformulations of quadratic combinatorial optimization problems can be obtained.\nFurther, since linearizable cost matrices for any quadratic combinatorial optimization problem form a linear subspace of the space of all $n(n-1) \times n(n-1)$ matrices, any basis of this subspace can be used to obtain strong reformulations~\cite{PunnenP18}.\n\nWith this motivation, we study the QTSP linearization problem in this paper.\nNecessary and sufficient conditions are presented for a quadratic cost matrix $Q$ associated 
with QTSP being linearizable.\nWe show that these conditions can be verified by an $\\orderO{n^5}$-time algorithm that also constructs a linearization of $Q$ whenever one exists.\nHowever, it may be noted that the input size of a general QTSP is $\\orderO{n^4}$ and hence the complexity of our algorithms is almost linear in the input size in the general case.\nOur characterization extends directly to the case of complete undirected graphs.\nFinally, some easily verifiable sufficient conditions for linearizability are also given along with some open problems.\n\nThe terminology ``linearization'' has also been used in a different context where a quadratic integer program is written as a linear integer program by adding new variables and constraints~\\cite{AdamsFG04,Glover75}.\nWe want to emphasize that the concept of linearization used in this paper is different.\n\nWe conclude this section by presenting the most-relevant notation used in the paper.\nLet $N := \\{ 1,2,\\ldots,n\\}$ and $N_i := N \\setminus \\{ i\\}$ for $i \\in N$.\nAll matrices are represented using capital letters (sometimes with superscripts, hats, overbars, etc.) 
and all elements of the matrix are represented by the corresponding subscripted lower-case letters (along with the corresponding superscripts, hats, overbars, etc.), where the subscripts denote row and column indices.\nWhen the rows and columns are indexed by elements of $N \times N$, the $((i,j),(k,l))$'th element of the matrix $Q$ is represented by $q_{ijkl}$, of $Q^R$ by $q^R_{ijkl}$, etc.\nVectors in $\RR^n$ are represented by bold lower-case letters (sometimes with superscripts, hats, overbars, etc.).\nThe $i$'th component of vector $\mathbf{a}$ is $a_i$, of vector $\bar{\mathbf{b}}$ is $\bar{b}_i$, etc.\nRows and columns of all matrices of size $n \times n$ are indexed by $N$, whereas rows and columns of all $n(n-1) \times n(n-1)$ matrices are indexed by edges of $G$ (i.e., by ordered pairs $(i,j) \in N \times N$, $i \neq j$).\nThe transpose of a matrix $Q$ is denoted by $\transpose{Q}$.\nThe vector space of all real-valued $n \times n$ matrices with standard matrix addition and scalar multiplication is denoted by $\mathbb{M}^n$.\nAccordingly, $\mathbb{M}^{n(n-1)}$ is the vector space of all $n(n-1) \times n(n-1)$ matrices.\n\n\section{The QTSP linearization problem}\n\nLet us first introduce some general properties and definitions used in the paper.\nA matrix $C \in\mathbb{M}^n$ satisfies the \emph{constant value property} (CVP) if there exists a constant $K$ such that $C(\tau) = K$ for all $\tau \in \mathcal{T}_n$.\nA matrix $C$ is said to be a \emph{weak sum matrix} if there exist vectors $\mathbf{a},\mathbf{b}\in \mathbb{R}^n$ such that $c_{ij} = a_i + b_j$ for $i,j \in N$ with $i \neq j$.\n\n\begin{lemma}[\cite{Berenguer79, Gabovich76, KabadiP03, KabadiP06}]\n A matrix $C \in \mathbb{M}^n$ satisfies the CVP if and only if it is a weak sum matrix (i.e., $c_{ij} = a_i + b_j$ for all $i,j \in N$ with $i \neq j$).\n In this case, the constant value of the tours is given by $\sum_{i\in N}(a_i + 
b_i)$.\n\end{lemma}\n\nTwo matrices $Q^1$ and $Q^2 \in \mathbb{M}^{n(n-1)}$ are \emph{equivalent modulo a linear matrix} if there exists a matrix $H \in \mathbb{M}^n$ such that $Q^1[\tau] = Q^2[\tau] + H(\tau)$ for all $\tau \in \mathcal{T}_n$.\nUsing the fact that the value of a tour depends on the sum $q_{ijk\ell} + q_{k\ell ij}$ and not the individual values $q_{ijk\ell}$ and $q_{k\ell ij}$, we obtain the following lemma.\n\begin{lemma}\n \label{LemmaInvariantTransformations}\n Let $Q \in \mathbb{M}^{n(n-1)}$.\n Then the following statements are equivalent.\n \begin{enumerate}\n \item\n $Q$ is linearizable.\n \item\n $\transpose{Q}$ is linearizable.\n \item\n $\frac{1}{2}\left(Q+\transpose{Q} \right)$ is linearizable.\n \item\n $Q+D+S$ is linearizable, where $D$ is any diagonal matrix and $S$ is any skew-symmetric matrix of the same dimension as $Q$.\n \item\n Any matrix equivalent to $Q$ modulo a linear matrix is linearizable.\n \end{enumerate}\n\end{lemma}\n\nRecall that a matrix $C \in \mathbb{M}^{n}$ is a \emph{linearization} of $Q \in \mathbb{M}^{n(n-1)}$ if $Q[\tau] = C(\tau)$ for all $\tau \in \mathcal{T}_n$.\nIn this case we say that $Q$ is a \emph{quadratic form} of $C$ and that the matrix $Q$ is \emph{linearizable}.\nNote that the sum of two linearizable matrices in $\mathbb{M}^{n(n-1)}$ is linearizable and the scalar multiple of a linearizable matrix is linearizable.\nHence, the collection of all linearizable matrices in $\mathbb{M}^{n(n-1)}$ forms a linear subspace of $\mathbb{M}^{n(n-1)}$.\n\nThe coefficients $q_{iji\ell}$ with $j \neq \ell$, the coefficients $q_{ijkj}$ with $i \neq k$ and the coefficients $q_{ijji}$ can never contribute a non-zero value to the objective function $Q[\tau]$ of a tour $\tau$.\nThese elements of the matrix $Q$ play no role in the optimization problem; therefore, we assume subsequently that $q_{ijk\ell} = 0$ for all indices $i,j,k$ and $\ell$ that satisfy $i = 
k$ and $j \\neq \\ell$ or $i \\neq k$ and $j = \\ell$ or $i = \\ell$ and $j = k$.\n\nTo characterize the linearizability of $Q$, working with the original matrix $Q$ seems tedious because of several reasons.\nFor example, as Lemma~\\ref{LemmaInvariantTransformations} indicates, we can add any skew-symmetric matrix to $Q$, maintaining linearizability.\nIt is important to restrict $Q$ in an appropriate way to bring some kind of uniqueness while the generality is maintained.\nFor this purpose, we say that a matrix $Q \\in \\mathbb{M}^{n(n-1)}$ is in \\emph{quadratic reduced form} if all of the following conditions are satisfied:\n\n\\begin{enumerate}\n\\item\n $Q$ is symmetric.\n\\item\n The diagonal elements of $Q$ are zeros.\n\\item\n All elements of $Q$ with rows and columns indexed by $\\{ (n,p),(p,n) : p\\in N_n\\}$ are zeros.\n\\end{enumerate}\n\nLet us now prove a useful decomposition theorem.\n\n\\begin{theorem}\n \\label{TheoremReducedFormDecomposition}\n Any matrix $Q \\in \\mathbb{M}^{n(n-1)}$ can be decomposed in $\\orderO{n^4}$ time into a pair $(Q^R, L)$ of matrices such that $Q^R \\in \\mathbb{M}^{n(n-1)}$ is in quadratic reduced form, $L \\in \\mathbb{M}^n$ and $Q[\\tau] = Q^R[\\tau] + L(\\tau)$ for all $\\tau \\in \\mathcal{T}_n$.\n\\end{theorem}\n\n\\begin{proof}\n Let $\\tilde{Q}$ and $\\tilde{L}$ be defined as\n \\[\n \\tilde{q}_{ijkl} =\n \\begin{cases}\n q_{ijk\\ell} - q_{ijkn} - q_{ijn\\ell} - q_{ink\\ell} + q_{inkn} + q_{inn\\ell} - q_{njk\\ell} + q_{njkn} +q_{njn\\ell} & \\text{ if } n \\notin \\{i,j,k,\\ell\\} \\\\\n 0 & \\text{ otherwise}\n \\end{cases}\n \\]\n and\n \\[\n \\tilde{l}_{ij} =\n \\begin{cases}\n \\sum \\limits_{k=1}^{n-1}(q_{ijkn} + q_{ijnk} + q_{knij} - q_{knin} - q_{knnj} + q_{nkij} - q_{nkin} - q_{nknj}) & \\text{ if } i \\neq j \\\\\n 0 & \\text{ otherwise.}\n \\end{cases}\n \\]\n It can be verified that all entries of rows and columns of $\\tilde{Q}$ indexed by $(n,p)$ or $(p,n)$ for $p \\in N_n$ are zeros.\n Further, with 
some algebra, it can be verified that $\tilde{Q}$ is equivalent to $Q$ modulo the linear matrix $\tilde{L}$.\n Let $D \in \mathbb{M}^{n(n-1)}$ be the diagonal matrix with $d_{ijij} = \tilde{q}_{ijij}$.\n Define $Q^R= \frac{1}{2}(\tilde{Q}+\transpose{\tilde{Q}}) - D$ and choose $L \in \mathbb{M}^n$ such that $l_{ij} = \tilde{l}_{ij} + d_{ijij}$.\n From the discussion above and Lemma~\ref{LemmaInvariantTransformations} we have $Q[\tau] = Q^R[\tau] + L(\tau)$ for all $\tau \in \mathcal{T}_n$ and that $Q^R$ is in quadratic reduced form.\n Finally, it is easy to see that only $\orderO{n^4}$ arithmetic operations are required to compute $Q^R$ and $L$.\n\end{proof}\n\nThe matrix $Q^R$ constructed in the proof of Theorem~\ref{TheoremReducedFormDecomposition} is referred to as the \emph{quadratic reduced form of $Q$}.\nThe decomposition $(Q^R,L)$ of $Q$ is referred to as \emph{quadratic reduced form decomposition} or simply \emph{QRF decomposition}.\nThe proof of Theorem~\ref{TheoremReducedFormDecomposition} provides one way of constructing a QRF decomposition of $Q$.\nIt may be noted that equivalent transformations discussed in~\cite{Burkard07,KabadiP11} in the context of the quadratic assignment problem can be modified appropriately to get another method for constructing a QRF decomposition.\n\n\begin{theorem}\n Let $Q \in \mathbb{M}^{n(n-1)}$ and $(Q^R,L)$ be a QRF decomposition of $Q$.\n Then $Q$ is linearizable if and only if $Q^R$ is linearizable.\n Further, if $P \in \mathbb{M}^n$ is a linearization of $Q^R$, then $P + L$ is a linearization of $Q$.\n\end{theorem}\n\n\begin{proof}\n Since $(Q^R,L)$ is a QRF decomposition of $Q$, $Q[\tau] = Q^R[\tau] + L(\tau)$ holds for all $\tau \in \mathcal{T}_n$.\n By Lemma~\ref{LemmaInvariantTransformations}, $Q$ is linearizable if and only if $Q^R$ is linearizable.\n If $P$ is a linearization of $Q^R$, it follows that $Q[\tau] = (P + L)(\tau)$ for all $\tau \in 
\\mathcal{T}_n$.\n\\end{proof}\n\nThus, to study the linearization problem of $Q$, it is sufficient to study the linearization problem of the quadratic reduced form $Q^R$ of $Q$.\nThe lemma below is a variation of the corresponding result proved in~\\cite{KabadiP11} for the quadratic assignment problem.\nSince it is used in our main theorem, a complete proof within the context of QTSP is given below.\n\\begin{lemma}\n \\label{LemmaLinearizationWithZeros}\n Suppose $Q^R \\in \\mathbb{M}^{n(n-1)}$ is linearizable and is in quadratic reduced form.\n Then there exists a linearization $C$ of $Q^R$ such that $c_{in} = 0$ and $c_{ni} = 0$ for all $i \\in N_n$.\n\\end{lemma}\n\n\\begin{proof}\n Suppose $C'$ is some linearization of $Q^R$.\n Let $\\alpha := \\sum_{i\\in N_n}c'_{in} \/ (n-2)$ and $\\beta := \\sum_{i\\in N_n}c'_{ni}\/(n-2)$.\n Define $a_i := -c'_{in} + \\beta$, $b_i := -c'_{ni} + \\alpha$ for all $i \\in N_n$ as well as $a_n := -\\alpha$ and $b_n := -\\beta$.\n Moreover, let $c_{ij} := c'_{ij} + a_i + b_j$ for all $i,j \\in N$.\n Then $\\sum_{i\\in N}(a_i+b_i) = 0$ and hence $C(\\tau) = C^{\\prime}(\\tau)$ for all $\\tau \\in \\mathcal{T}_n$.\n Now it can be verified that $c_{in} = c_{ni} = 0$ for all $i \\in N_n$, which concludes the proof.\n\\end{proof}\n\nLet $\\bar{Q}$ be the principal submatrix obtained from $Q^R$ by deleting rows and columns indexed by the elements of the set $\\{ (n,p),(p,n) : p\\in N_n\\}$.\nNote that $\\bar{Q} \\in \\mathbb{M}^{(n-1)(n-2)}$.\nFor any $i,j \\in N_n$ with $i \\neq j$, define $Z^{ij} \\in \\mathbb{M}^{n-1}$ such that\n\\begin{equation*}\n z^{ij}_{uv} =\n \\begin{cases}\n 0 & \\text{ if } (i,j) = (u,v), \\\\\n \\bar{q}_{ijuv} & \\text{if } (i,j) \\neq (u,v).\n \\end{cases}\n\\end{equation*}\nOur next theorem characterizes the linearizability of a matrix in quadratic reduced form.\nNote that, for $n \\leq 3$, any quadratic cost matrix is linearizable.\nSo, we restrict our attention to the case when $n \\geq 
4$.\n\n\\begin{theorem}\n \\label{TheoremCharacterization}\n Let $Q^R \\in \\mathbb{M}^{n(n-1)}$ be in quadratic reduced form and $n \\geq 4$.\n Then $Q^R$ is linearizable if and only if there exists a matrix $F \\in \\mathbb{M}^{n-1}$ (with elements $f_{ij}$) such that\n \\begin{enumerate}\n \\item\n \\label{TheoremCharacterizationFirst}\n $Z^{ij}(\\tau) - Z^{k\\ell}(\\tau) = \\frac{1}{2} \\left(f_{ij} - f_{kl}\\right )$ holds for every $\\tau \\in \\mathcal{T}_{n-1}$ and for all $(i,j), (k,\\ell) \\in \\tau $, and\n \\item\n \\label{TheoremCharacterizationSecond}\n $\\frac{n-2}{n-3}F$ is a linearization of $\\bar{Q}\\in \\mathbb{M}^{(n-1)(n-2)}$.\n \\end{enumerate}\n\\end{theorem}\n\n\\begin{proof}\nSuppose $Q^R$ is linearizable.\nLet $C$ be a linearization of $Q^R$ satisfying the condition of Lemma~\\ref{LemmaLinearizationWithZeros} and let $C'$ be the submatrix obtained from $C$ by deleting its $n$'th row and column.\nDefine $F$ via $f_{ij} := c_{ij}$ for all $i,j \\in N_n$ with $i\\neq j$.\nWe will show that $F$ satisfies the conditions of the theorem.\nConsider any $\\tau \\in \\mathcal{T}_{n-1}$.\nFor any $(i,j) \\in \\tau$, we define the tour $\\tau^{ij} \\in \\mathcal{T}_n$ obtained by deleting arc $(i,j)$ from $\\tau$ and introducing arcs $(i,n)$ and $(n,j)$ (See Figure~\\ref{FigureTours}).\n\n\\begin{figure}[H]\n \\centering\n \\begin{tikzpicture}[>=stealth, decoration={markings, mark=at position 0.5 with {\\arrow{>}}}]\n\t\\node [vrtx] (v1) at (-3, 1.5){};\n\t\\node [vrtx] (v2) at (-1.8273, 0.9352) {};\n \\node [vrtx] (v3) at (-1.5376, -0.3338) {};\n \\node [vrtx, label=below right:{\\footnotesize$j$}] (v4) at (-2.3492, -1.3515) {};\n \\node [vrtx, label=below left:{\\footnotesize$i$}] (v5) at (-3.6508, -1.3515) {};\n \\node [vrtx] (v6) at (-4.4624, -0.3338) {};\n \\node [vrtx] (v7) at (-4.1727, 0.9352) {};\n\t\\draw[postaction={decorate}] (v5) -- (v4);\n\t\\draw[postaction={decorate}] (v6) -- (v5);\n\t\\draw[postaction={decorate}] (v7) -- 
(v6);\n\t\\draw[postaction={decorate}] (v1) -- (v7);\n\t\\draw[postaction={decorate}] (v2) -- (v1);\n\t\\draw[postaction={decorate}] (v3) -- (v2);\n\t\\draw[postaction={decorate}] (v4) -- (v3);\n\n\t\\node [vrtx] (u1) at (3, 1.5){};\n\t\\node [vrtx] (u2) at (4.1727, 0.9352) {};\n \\node [vrtx] (u3) at (4.4624, -0.3338) {};\n \\node [vrtx, label=below right:{\\footnotesize$j$}] (u4) at (3.6508, -1.3515) {};\n \\node [vrtx, label=below left:{\\footnotesize$i$}] (u5) at (2.3492, -1.3515) {};\n \\node [vrtx] (u6) at (1.5376, -0.3338) {};\n \\node [vrtx] (u7) at (1.8273, 0.9352) {};\n \\node [vrtx, label=above:{\\footnotesize$n$}] (n) at (3, -0.5) {};\n\t\\draw[postaction={decorate}] (u5) -- (n);\n\t\\draw[postaction={decorate}] (u6) -- (u5);\n\t\\draw[postaction={decorate}] (u7) -- (u6);\n\t\\draw[postaction={decorate}] (u1) -- (u7);\n\t\\draw[postaction={decorate}] (u2) -- (u1);\n\t\\draw[postaction={decorate}] (u3) -- (u2);\n\t\\draw[postaction={decorate}] (u4) -- (u3);\n\t\\draw[postaction={decorate}] (n)-- (u4);\n\t\\draw[dashed, postaction={decorate}] (u5) -- (u4);\n \\end{tikzpicture}\n \\caption{Tours $\\tau \\in \\mathcal{T}_{n-1}$ (shown on the left) and $\\tau^{ij}\\in \\mathcal{T}_n$ (shown on the right) with $n = 8$.}\n \\label{FigureTours}\n\\end{figure}\n\nNote that $Q^R[\\tau^{ij}] = \\bar{Q}[\\tau] - 2Z^{ij}(\\tau)$ and $C(\\tau^{ij}) = C^{\\prime}(\\tau) - c_{ij}$.\nBut $Q^R[\\tau_{ij}] = C(\\tau_{ij})$, and we obtain\n\\begin{equation}\n \\label{TheoremCharacterizationZ1}\n Z^{ij}(\\tau) = \\frac{1}{2} \\left( \\bar{Q}[\\tau] - C^{\\prime}(\\tau) +c_{ij}\\right).\n\\end{equation}\nSimilarly, for $(k,\\ell) \\in \\tau$,\n\\begin{equation}\n \\label{TheoremCharacterizationZ2}\n Z^{k\\ell}(\\tau) = \\frac{1}{2} \\left( \\bar{Q}[\\tau] - C^{\\prime}(\\tau) +c_{k\\ell}\\right).\n\\end{equation}\nSubtracting~\\eqref{TheoremCharacterizationZ2} from~\\eqref{TheoremCharacterizationZ1} yields\n\\[\n Z^{ij}(\\tau) - Z^{k\\ell}(\\tau) = \\frac{1}{2} \\left(f_{ij} 
- f_{kl}\\right),\n\\]\ni.e., condition~\\eqref{TheoremCharacterizationFirst} of the theorem is satisfied.\nNow, adding~\\eqref{TheoremCharacterizationZ1} for all $(i,j)\\in \\tau$, we obtain\n\\begin{align*}\n \\bar{Q}[\\tau] & = \\sum_{(i,j)\\in \\tau}Z^{ij}(\\tau) =\n \\sum_{(i,j)\\in \\tau} \\frac{1}{2} \\left( \\bar{Q}[\\tau] - C^{\\prime}(\\tau) +c_{ij}\\right)\n = \\frac{1}{2} \\left[(n-1)\\bar{Q}[\\tau] - (n-2)C^{\\prime}(\\tau)\\right],\n\\end{align*}\nwhich implies $\\bar{Q}[\\tau] = \\frac{n-2}{n-3}C^{\\prime}(\\tau)$.\nThus, $\\bar{Q}$ is linearizable and $\\frac{n-2}{n-3}C^{\\prime}$ is a linearization.\n\n\\medskip\n\nConversely, suppose $Q^R$ satisfies conditions~\\eqref{TheoremCharacterizationFirst} and~\\eqref{TheoremCharacterizationSecond} of the theorem.\nWe will show that $Q^R$ is linearizable.\nTo this end, define the matrix $C \\in \\mathbb{M}^n$ via\n\\begin{equation}\n \\label{TheoremCharacterizationC}\n c_{ij} :=\n \\begin{cases}\n f_{ij} & \\text{ if } i,j \\in N_n, \\\\\n 0 & \\text{ otherwise}.\n \\end{cases}\n\\end{equation}\nLet $C^{\\prime}$ be the principal submatrix obtained from $C$ by deleting its $n$'th row and column. From condition~\\eqref{TheoremCharacterizationFirst} of the theorem, for every tour $\\tau \\in \\mathcal{T}_{n-1}$ and every pair $(i,j),(k,\\ell) \\in \\tau$ of arcs in the tour,\n\\begin{equation}\n \\label{TheoremCharacterizationPhi}\n 2Z^{ij}(\\tau) = c_{ij} + \\phi(\\tau)\n\\end{equation}\nholds, where $\\phi(\\tau) = 2Z^{k\\ell}(\\tau) - c_{k\\ell}$.\nFrom condition~\\eqref{TheoremCharacterizationSecond}, for every $\\tau \\in \\mathcal{T}_{n-1}$, $\\bar{Q}[\\tau] = \\frac{n-2}{n-3}C^{\\prime}(\\tau)$, which implies\n\\begin{align}\n 0 &= (n-1)\\bar{Q}[\\tau] - 2\\bar{Q}[\\tau] - (n-2)C^{\\prime}(\\tau) \\nonumber \\\\\n &= (n-1)\\bar{Q}[\\tau] - \\sum_{(i,j)\\in \\tau}2Z^{ij}(\\tau) - (n-2)C^{\\prime}(\\tau). 
\\label{TheoremCharacterizationInsertZ}\n\\end{align}\nSubstituting equation~\\eqref{TheoremCharacterizationPhi} in~\\eqref{TheoremCharacterizationInsertZ} yields\n\\begin{align}\n 0 &= (n-1)\\bar{Q}[\\tau] - \\sum_{(i,j)\\in \\tau}\\left[c_{ij}+\\phi(\\tau) \\right]- (n-2)C^{\\prime}(\\tau) \\nonumber \\\\\n &= (n-1)\\bar{Q}[\\tau] - C^{\\prime}(\\tau) - (n-1)\\phi(\\tau)- (n-2)C^{\\prime}(\\tau), \\nonumber\n\\end{align}\nwhich implies\n\\begin{equation}\n \\phi(\\tau) = \\bar{Q}[\\tau] - C^{\\prime}(\\tau) \\label{TheoremCharacterizationPhiNew}\n\\end{equation}\nSubstituting equation~\\eqref{TheoremCharacterizationPhiNew} in~\\eqref{TheoremCharacterizationPhi}, we have, for every $(i,j) \\in \\tau$,\n\\begin{equation}\n 2Z^{ij}(\\tau) = c_{ij} + \\bar{Q}[\\tau] - C^{\\prime}(\\tau). \\label{TheoremCharacterizationZNew}\n\\end{equation}\nConsider any tour $\\tau^{ij} \\in \\mathcal{T}_n$, where $i$ and $j$ are the predecessor and successor of $n$ in $\\tau^{ij}$, respectively.\nLet $\\tau^0$ be the tour in $\\mathcal{T}_{n-1}$ obtained from $\\tau^{ij}$ by shortcutting the path $i \\rightarrow n \\rightarrow j$ using arc $(i,j)$.\nThen,\n\\begin{align*}\n Q^R[\\tau^{ij}] &= \\bar{Q}[\\tau^0] - 2Z^{ij}(\\tau^0) \\\\\n &= \\bar{Q}[\\tau^0] -\\left[c_{ij} + \\bar{Q}[\\tau^0] -C^{\\prime}(\\tau^0)\\right] \\\\\n &= C^{\\prime}(\\tau^0) - c_{ij} = C(\\tau^{ij}),\n\\end{align*}\nestablishing that $Q^R$ is linearizable with $C$ as a linearization.\n\\end{proof}\n\n\\begin{theorem}\n The QTSP linearization problem can be solved in $\\orderO{n^5}$ time.\n Moreover, a linearization of $Q$ can be constructed in $\\orderO{n^5}$ time whenever it exists.\n\\end{theorem}\n\n\\begin{proof}\n Given $Q$, its QRF decomposition $(Q^R,L)$ can be obtained in $\\orderO{n^4}$ time using the construction from Theorem~\\ref{TheoremReducedFormDecomposition}.\n We now test the linearizability characterization of $Q^R$ given in Theorem~\\ref{TheoremCharacterization}.\n Let $\\Delta := 
\\{[(i,j),(k,\\ell)] : i,j,k,\\ell \\in N_n, k\\neq i, j\\neq \\ell, (i,j)\\neq (j,i), i\\neq j, k \\neq \\ell\\}$.\n To verify condition~\\eqref{TheoremCharacterizationFirst} of Theorem~\\ref{TheoremCharacterization} and construct a linearization, if exists, we have to find $f_{ij}, (i,j) \\in N_n, i \\neq j$, such that for any $\\tau \\in \\mathcal{T}_n,$ $Z^{ij}(\\tau) - Z^{k\\ell}(\\tau) = \\frac{1}{2} \\left(f_{ij} - f_{kl}\\right )$ for all $(i,j), (k,\\ell) \\in \\tau$.\n This can in principle be achieved by first testing if the matrix $P^{ijk\\ell} := Z^{ij}-Z^{kl}$ satisfies the CVP.\n If it does not satisfy the CVP for some $[(i,j),(k,\\ell)] \\in \\Delta$, then condition~\\eqref{TheoremCharacterizationFirst} of Theorem~\\ref{TheoremCharacterization} is not satisfied.\n If the CVP is satisfied for all $[(i,j),(k,\\ell)] \\in \\Delta$, let $\\eta_{ijk\\ell}$ be the constant value of tours obtained for the matrix $P^{ijk\\ell}$.\n Now we need to solve the system\n \\begin{equation}\n \\label{EquationAlgorithmSystem}\n f_{ij} - f_{k\\ell} = 2\\eta_{ijk\\ell} \\text{ for all } [(i,j),(k,\\ell)] \\in \\Delta.\n \\end{equation}\n If the system is inconsistent, condition~\\eqref{TheoremCharacterizationFirst} of Theorem~\\ref{TheoremCharacterization} is not satisfied.\n Using the characterization of TSP with CVP~\\cite{Berenguer79,Gabovich76,KabadiP03,KabadiP06}, each of the $\\orderO{n^4}$-many $\\eta_{ijk\\ell}$-values can be obtained in $\\orderO{n^2}$ time.\n\n Since this would lead to a running time of $\\orderO{n^6}$ just to compute the right-hand side of the system, we will now improve this part of the algorithm.\n To this end, consider the graph $\\tilde{H} = (E,\\tilde{\\Delta})$, where\n $\\tilde{\\Delta} := \\{ \\{(i,j),(k,\\ell) \\in E \\times E : [(i,j),(k,\\ell)] \\in \\Delta \\}$,\n i.e., two edges $(i,j)$ and $(k,\\ell)$ are adjacent in $\\tilde{H}$ if and only if both arcs appear in at least one tour.\n It is easy to check that $\\tilde{H}$ is 
connected and hence contains a spanning tree $\\tilde{T}$.\n Moreover, by exploring the nodes from some arbitrary root node, one finds a sequence of $|E|-1$ rows of~\\eqref{EquationAlgorithmSystem},\n each of which contains an $f$-variable that was not present in the previous rows.\n Since the first row contains two $f$-variables, this submatrix has rank at least $|E|-1$.\n\n We can find in $\\orderO{n^2}$ time a spanning tree of $\\tilde{H}$ and compute the corresponding $(|E|-1)$-many $\\eta$-values in $\\orderO{n^2 \\cdot |E|} = \\orderO{n^4}$ time.\n Such $\\eta$-values may not exist, and in this case condition~\\ref{TheoremCharacterizationFirst} of Theorem~\\ref{TheoremCharacterization} is not satisfied.\n If these $\\eta$-values exist, we proceed as follows.\n Since every row of this system contains exactly two variables, it can be solved in time $\\orderO{n^2}$ using the algorithm of Aspvall and Shiloach~\\cite{AspvallS80}.\n Note that due to the selection of the rows the system will always be consistent, that is, we will compute a candidate solution $\\tilde{F}$.\n Moreover, the system is under-determined, since $\\tilde{F} + \\lambda \\onevec$ is also a solution for every $\\lambda \\in \\QQ$, where $\\onevec$ denotes the matrix having all entries equal to $1$.\n This shows that system~\\eqref{EquationAlgorithmSystem} has in fact rank $|E|-1$.\n Notice also that two rows of~\\eqref{EquationAlgorithmSystem} corresponding to $[(i,j),(k,\\ell)] \\in \\Delta$ and to $[(k,\\ell),(i,j)]$ are negatives of each other.\n Hence, it does not matter which one we actually include in the subsystem.\n\n We claim that our candidate solution $\\tilde{F}$ satisfies all (remaining) rows of~\\eqref{EquationAlgorithmSystem}.\n To this end, consider a pair $[(i,j),(k,\\ell)] \\in \\Delta$ of arcs.\n Let $(i_t,j_t)$ for $t = 0,1,\\dotsc,m$ be the (ordered) nodes in the unique $(k,\\ell)$-$(i,j)$-path in the spanning tree $\\tilde{T}$.\n In particular, $(i_0,j_0) = (k,\\ell)$ and 
$(i_m,j_m) = (i,j)$.\n By construction, $\\tilde{F}$ satisfies\n \\begin{align*}\n \\tilde{f}_{i_tj_t} - \\tilde{f}_{i_{t-1}j_{t-1}} &= 2\\eta_{i_tj_ti_{t-1}j_{t-1}}\n \\end{align*}\n for $t=1,2,\\dotsc,m$. The sum of the equations yields\n \\begin{align*}\n \\tilde{f}_{ij} - \\tilde{f}_{k\\ell} = \\tilde{f}_{i_mj_m} - \\tilde{f}_{i_0j_0} &= 2 \\sum_{t=1}^m \\eta_{i_tj_ti_{t-1}j_{t-1}}.\n \\end{align*}\n Similarly, for all tours $\\tau \\in \\mathcal{T}_{n-1}$ and for $t = 1,2,\\dotsc,m$, we have\n \\begin{align*}\n Z^{i_tj_t}(\\tau) - Z^{i_{t-1}j_{t-1}}(\\tau) &= \\eta_{i_tj_ti_{t-1}j_{t-1}}.\n \\end{align*}\n Again, the sum yields\n \\begin{align*}\n Z^{ij}(\\tau) - Z^{k\\ell}(\\tau) &= \\sum_{t=1}^m \\eta_{i_tj_ti_{t-1}j_{t-1}},\n \\end{align*}\n which establishes that the CVP is satisfied for $P^{ijk\\ell}$ and that $\\eta_{ijk\\ell} = \\sum_{t=1}^m \\eta_{i_tj_ti_{t-1}j_{t-1}}$.\n Hence, $\\tilde{F}$ satisfies the row of~\\eqref{EquationAlgorithmSystem} corresponding to $[(i,j),(k,\\ell)]$, which shows that this row is redundant.\n\n The parameter $\\lambda$ can be easily determined by enforcing $Q^R[\\tau] = (\\tilde{F} + \\lambda \\onevec)(\\tau)$ for an arbitrary tour $\\tau$.\n Since $\\onevec(\\tau) = n$, we can set $F := \\tilde{F} + \\lambda \\onevec$ for $\\lambda := \\tfrac{1}{n}(Q^R[\\tau] - \\tilde{F}(\\tau))$.\n\n To summarize, the overall complexity of verifying condition~\\eqref{TheoremCharacterizationFirst} of Theorem~\\ref{TheoremCharacterization} and computing a candidate solution $F$ is $\\orderO{n^4}$.\n\n If condition~\\eqref{TheoremCharacterizationFirst} is satisfied, then we need to verify condition~\\eqref{TheoremCharacterizationSecond}.\n This is achieved by testing if $\\bar{Q}$ is linearizable and if yes, by testing if $\\frac{n-2}{n-3}F$ is one of the linearizations.\n If $\\bar{Q}$ is not linearizable, then condition~\\eqref{TheoremCharacterizationSecond} fails.\n Suppose $H$ is a linearization of $\\bar{Q}$.\n Then, to verify if 
$\\frac{n-2}{n-3}F$ is another linearization, it is enough to verify if the matrix $H - \\frac{n-2}{n-3}F$ satisfies the CVP with the constant tour value equal to zero.\n Given $H$, this can be achieved in $\\orderO{n^2}$ time.\n\n Thus, the problem of testing linearizability of $Q$ reduces to that of testing linearizability of $\\bar{Q}$, which has the size parameter $n$ reduced by 1, along with an additive $\\orderO{n^4}$ effort.\n In the base case $n = 3$, $Q^R$ is trivially linearizable since there only exist two opposite tours, which do not share arcs.\n Thus, if $g(n)$ is the complexity of testing linearizability of $Q$, we have $g(n) = g(n-1) + \\orderO{n^4}$, establishing $g(n) = \\orderO{n^5}$.\n\n To obtain the linearization, construct the matrix $C$ from $F$ according to~\\eqref{TheoremCharacterizationC}.\n Then $C + L$ is the required linearization of $Q$, where $L$ is provided by the QRF decomposition $(Q^R, L)$ of $Q$.\n\\end{proof}\n\n\\section{Simple sufficient conditions and extensions}\n\nThe complexity of testing linearizability of $Q$, although almost linear in the input size $\\orderO{n^4}$, imposes limits on the applicability of the result outside the theoretical realm.\nIn this section we present some sufficient conditions for linearizability that can be verified more easily.\n\nThe cost matrix $C\\in \\mathbb{M}^n$ associated with a linear TSP on a complete directed graph on $n$ nodes satisfies the \\emph{$(k,\\ell)$-constant value property} ($(k,\\ell)$-CVP) if all tours containing the arc $(k,\\ell)$ have the same cost.\n\\begin{lemma}\n A cost matrix $C \\in \\mathbb{M}^n$ satisfies the $(k,\\ell)$-CVP if and only if there exist $a_i \\in \\mathbb{R}$ for $i \\in N_k$ and $b_j \\in \\mathbb{R}$ for $j \\in N_{\\ell}$ such that $c_{ij} = a_i + b_j$ for all $i,j \\in N$ with $i\\neq j$, $i \\neq k$ and $j \\neq \\ell$.\n Moreover, the constant value of the tours is given by $c_{k\\ell} + \\sum_{i\\in N_k} a_i + \\sum_{i \\in N_{\\ell}} b_i$.\n\\end{lemma}\nThe proof of this lemma can be obtained by making use of the 
characterization of matrices satisfying the CVP on complete digraphs.\nFor each row indexed by $(i,j)$ of $Q$, define $R^{ij}\\in \\mathbb{M}^n$ by\n\\begin{equation}\n r^{ij}_{uv} :=\n \\begin{cases}\n q_{ijuv} & \\text{if } u \\neq v \\\\\n 0& \\text{ otherwise}.\n \\end{cases}\n\\end{equation}\n\n\\begin{lemma}\n If $R^{ij}$ satisfies the $(i,j)$-CVP for all $(i,j) \\in N \\times N$ with $i\\neq j$, then $Q$ is linearizable.\n Moreover, if $L \\in \\mathbb{M}^n$ is a linearization, then $l_{ij}$ is equal to the $(i,j)$-CVP value of tours.\n\\end{lemma}\n\n\\begin{proof}\n For any $\\tau \\in \\mathcal{T}_n$, $Q[\\tau] = \\sum_{(i,j)\\in \\tau}R^{ij}(\\tau) = \\sum_{(i,j)\\in\\tau}l_{ij} = L(\\tau)$.\n\\end{proof}\nA corresponding result can be derived by considering columns of $Q$.\nAnother simple sufficient condition for linearizability is that $Q$ is a weak-sum matrix.\nA more general version of this condition is provided below.\n\\begin{lemma}\n If there exist $A,B,D,H \\in \\mathbb{R}^{n\\times n\\times n}$ such that $q_{ijkl} = a_{ijk} + b_{ijl} + d_{ikl} + h_{jkl}$ for all $i,j,k,l$ with $i\\neq j$ and $k\\neq l$ then $Q$ is linearizable.\n\\end{lemma}\n\\begin{proof}\n Let $\\alpha_{ij} := \\sum \\limits_{k=1}^n a_{ijk}$, $\\beta_{ij} := \\sum \\limits_{k=1}^n b_{ijk}$, $\\gamma_{ij} := \\sum \\limits_{k=1}^n d_{kij}$, $\\delta_{ij} := \\sum \\limits_{k=1}^n h_{kij}$ and $c_{ij} := \\alpha_{ij} + \\beta_{ij} + \\gamma_{ij} + \\delta_{ij}$ for all $i,j$ with $i\\neq j$.\n Also, let $x_{ij}$ be the binary decision variable which takes value $1$ if and only if the underlying tour contains arc $(i,j)$.\n Thus, the values of $x_{ij}$ completely define a tour.\n Now,\n\\begin{align*}\n Q[\\tau]\n &= \\sum \\limits_{i=1}^n \\sum \\limits_{j=1,j\\neq i}^n \\sum \\limits_{k=1}^n \\sum \\limits_{l=1,k\\neq l}^n q_{ijkl}x_{ij}x_{kl} \\\\\n &= \\sum \\limits_{i=1}^n \\sum \\limits_{j=1,j\\neq i}^n \\sum \\limits_{k=1}^n \\sum \\limits_{l=1,k\\neq l}^n 
(a_{ijk}+b_{ijl}+d_{ikl}+h_{jkl})x_{ij}x_{kl} \\\\\n &= \\sum \\limits_{i=1}^n \\sum \\limits_{j=1,j\\neq i}^n x_{ij} \\sum \\limits_{k=1}^n a_{ijk} \\sum \\limits_{l=1,k\\neq l}^n x_{kl} + \\sum \\limits_{k=1}^n \\sum \\limits_{l=1,k\\neq l}^n x_{kl} \\sum \\limits_{i=1}^n b_{ikl} \\sum \\limits_{j=1,i\\neq j}^n x_{ij} \\\\\n &\\qquad+ \\sum \\limits_{i=1}^n \\sum \\limits_{j=1,j\\neq i}^n x_{ij} \\sum \\limits_{l=1}^n d_{ijl} \\sum \\limits_{k=1,k\\neq l}^n x_{kl} + \\sum \\limits_{k=1}^n \\sum \\limits_{l=1,k\\neq l}^n x_{kl} \\sum \\limits_{j=1}^n h_{jkl} \\sum \\limits_{i=1,i\\neq j}^n x_{ij} \\\\\n &=\\sum \\limits_{i=1}^n \\sum \\limits_{j=1, i\\neq j}^n ( \\alpha_{ij} + \\beta_{ij} + \\gamma_{ij} + \\delta_{ij}) x_{ij}\n = \\sum \\limits_{i=1}^n \\sum \\limits_{j=1, j\\neq i}^n c_{ij}x_{ij}\n = C(\\tau),\n\\end{align*}\nwhich concludes the proof.\n\\end{proof}\n\nThe result in the previous lemma can be strengthened further by excluding the conditions of the lemma for some appropriate combinations of $i,j,k$ and $\\ell$.\n\nSo far we have been considering complete directed graphs.\nCorresponding results can be derived easily to solve the linearization problem for QTSP on a complete undirected graph $K_n$.\n\nAn interesting related question is to explore the linearization problem of QTSP on other meaningful classes of graphs.\nThe answer will very much depend on the structure of the graph.\nFor example, if the underlying graph is a wheel, any quadratic cost matrix $Q$ is linearizable.\nNote that a wheel on $n$ nodes contains $n-1$ tours and we can write a system of $n-1$ linear equations, where each equation corresponding to tour, and the variables are elements of a 'trial' linearization.\nIt can be shown that this system is always consistent.\nWe leave the question of characterizing linearizable quadratic cost matrices of specially structured graphs as a topic for further research.\n\n\\subsection*{Acknowledgement}\nThis project was initiated jointly with 
Santosh Kabadi, who passed away.\nWe acknowledge the significant contributions Santosh made in obtaining the results of this paper.\nThe work is supported by an NSERC discovery grant and an NSERC discovery accelerator supplement awarded to Abraham P.\\ Punnen.\n\n\\bibliographystyle{plain}\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}}\n+{"text":"\\section*{I Introduction}\n\nThe quadratic Mandelbrot set is the set of non-divergent points\nin the iteration $z_{n+1}:=z_n^2+c$,~$z_0:=(0,0)$,~$\\forall\nc$, over the binary numbers, with the same norm as in the\ncomplex case. Recently a convergence proof for the quadratic\nMandelbrot set was given \\cite{Met}. This set was discussed\nnumerically in \\cite{Sen}. The puzzling effect that changing\nonly one sign in the iteration formula results in completely\ndifferent, not even chaotic, pictures was expressed by the term\n``perplex numbers'', which are nothing but binary numbers\n\\cite{Maj,Fje}. The disappearance of the chaotic behavior was\ncalled ``The mystery of the quadratic Mandelbrot set'' in \\cite{Met}.\n\nIn this note, we want to show how the two cases can be\nunderstood in a Clifford algebraic framework. This provides us\nwith several advantages over the usual picture.\n\nFirst we introduce the complex numbers as operator spinors\nacting on a unit reference vector in the Euclidean vector space\n{\\bf E(2)}. This is in analogy with Kustaanheimo and Stiefel\n\\cite{Kus}, Lounesto \\cite{Lou-mech} and the inspiring chapter 8,\n``Spinor Mechanics'', of Hestenes' ``New Foundation of classical\nMechanics'' \\cite{Hes-mech}.\n\nIn a second step we utilize Clifford algebras with arbitrary,\nnot necessarily symmetric or antisymmetric, bilinear forms. 
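The one-sign difference described in the introduction can be made concrete with a small numerical sketch. The code below is our own illustration, not the paper's computation: it iterates the quadratic map $z \mapsto z^2 + c$, $z_0 = 0$, over both number systems, using the Euclidean norm in both cases; only the sign of $y^2$ in the squaring rule distinguishes $i^2=-1$ from $j^2=+1$.

```python
# Illustration only (not the paper's computation): one sign in the squaring
# rule separates the complex case (i^2 = -1) from the binary/"perplex"
# case (j^2 = +1) in the quadratic iteration z -> z^2 + c, z_0 = 0.

def square_complex(x, y):
    # (x + i*y)^2 = (x^2 - y^2) + 2*x*y * i
    return x * x - y * y, 2 * x * y

def square_binary(x, y):
    # (x + j*y)^2 = (x^2 + y^2) + 2*x*y * j, since j^2 = +1
    return x * x + y * y, 2 * x * y

def euclidean(x, y):
    # squared Euclidean norm, used here for both number systems
    return x * x + y * y

def escapes(square, cx, cy, norm, max_iter=100, bound=4.0):
    """True if the orbit of 0 under z -> z^2 + c leaves the given bound."""
    x, y = 0.0, 0.0
    for _ in range(max_iter):
        x, y = square(x, y)
        x, y = x + cx, y + cy
        if norm(x, y) > bound:
            return True
    return False

# c = i: bounded in the complex plane, divergent over the binary numbers.
print(escapes(square_complex, 0.0, 1.0, euclidean))  # False
print(escapes(square_binary, 0.0, 1.0, euclidean))   # True
```

For $c=i$ the complex orbit is eventually periodic while the binary orbit diverges after a few steps, matching the qualitatively different pictures mentioned above.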
Such\nalgebras were discussed by Chevalley \\cite{Che}, Riesz\n\\cite{Rie}, Oziewicz \\cite{Ozi} and Lounesto \\cite{Lou-bili} in\na mathematical and geometrical manner.\n\nThis procedure is a general tool and was used in \\cite{Fau} in\nan entirely different context. Here the geometrical point of view\nallows a generalization of the concept of operator spinors to\nthose vector spaces which are equipped with general\nnondegenerate bilinear forms.\n\nThe third step stems from the observation that the Clifford\nalgebra is strongly connected with a quadratic algebra. This\nstructure provides us with several existence and uniqueness\ntheorems. Separability provides us with a unique (pseudo)\nnormform, as well as with a unique trace \\cite{Hah}.\n\nSurprisingly, we are left with a norm different from the one used\nin \\cite{Met}. But here the (pseudo) normform is compatible with\nthe algebraic structure and the geometrical meaning. These\n(pseudo) normforms and traces are therefore preferable.\n\nUsing these norms, we clearly see different images in numerical\nexperiments. The obtained pictures are in agreement with the\nphysical situation at hand. Several special cases will be\ndiscussed. The Mandelbrot dynamics is shown to be incompatible\nwith this structure, and a better one should be derived from\ninhomogeneous {\\bf D$\\times$Ispin} or {\\bf Ispin} groups. The\nbreakdown of the dynamics in the hyperbolic case may be\ninterpreted as a decay process or an absorption of the particle\nconsidered.\n\nIn section II we introduce the operator spinor for {\\bf\nE(2)}\\footnote{See Hestenes \\cite{Hes-mech} for an enlargement\nto 3--dimensional space and the spinor gauge formulation of the\nKepler motion.}. In section III we introduce Clifford algebras\nwith nonsymmetric bilinear forms, described explicitly in\n\\cite{Ozi}. In section IV we recall several facts from the\ntheory of quadratic algebras as proposed in \\cite{Hah}. 
In\nsection V we discuss the Mandelbrot set over arbitrary\ntwo--component number systems using the (pseudo) normforms\ninduced by algebraic and geometric considerations. The\nconclusion summarizes our results and compares it to other work\ndone.\n\n\\section*{II Complex Numbers as Operator Spinors on {\\bf E(2)}}\n\nIn this section we introduce the concept of algebraic spinors in\nthe sense of Hestenes \\cite{Hes-mech,HesSob} and others\n\\cite{Che,Rie}. Therein a geometrical meaning is given to\nthe complex numbers. Indeed they are commonly identified with\ncoordinates, which is quite obscure. There is a long quest in\nmathematics to give a geometrical meaning to the complex entities.\nIn spite of their natural occurring in algebraic geometry they\nremained somehow mysterious \\cite{Kle}. An attempt in this direction\nseems very useful in the light of the ubiquitous appearance of\ncomplex numbers and {\\it complex coordinates} in physics.\n\nWe choose a standard orthonormal basis $e_i$,~$i\\in (1,2)$ of\n$CL(\\mbox{\\bf E(2)},\\delta)$ with the properties (algebraproduct\nby juxtaposition)\n\\begin{eqnarray}\ne_i e_i = e_i^2=1&&\\mbox{normalized} \\nonumber \\\\\ne_ie_j+e_je_i=0,&& i\\not= j;\\quad\\mbox{orthogonality.}\n\\end{eqnarray}\n\nThe standard involution (conjugation) may be defined by\n\\begin{eqnarray}\nJ &:& CL \\rightarrow CL \\nonumber \\\\\nJ &:& J(ab):=J(a)J(b) \\nonumber \\\\\nJ &:& J\\vert_{{\\bf K}\\oplus{\\bf E(2)}}:=id_{\\bf K}-id_{\\bf E(2)}\n\\nonumber \\\\\n && J^2=id_{CL}.\n\\end{eqnarray}\nThe reversion is the main antiautomorphism defined by (see\n\\cite{BudTra})\n\\begin{eqnarray}\n\\tilde{~} &:& CL\\rightarrow CL \\nonumber \\\\\n\\tilde{~} &:& (ab)\\tilde{~} :=b\\tilde{~} a\\tilde{~} \\nonumber \\\\\n\\tilde{~} &:& \\tilde{~}\\vert_{{\\bf K}\\oplus{\\bf E(2)}}:=id_{\n{\\bf K}\\oplus{\\bf E(2)}}\\nonumber \\\\\n&& (\\tilde{~})\\tilde{~}=id_{CL},\n\\end{eqnarray}\nwhere {\\bf K} and {\\bf E(2)} are the images of the field (ring)\nand 
the vector space (module) in $CL$. Because there is a natural\ninjection, we will not distinguish between these pictures.\n\nAn algebra basis is given by\n\\begin{eqnarray}\n\\{X_I \\} &:=& \\{1,e_2\\wedge e_1 ,e_1,e_2\\}\\label{X}\n\\end{eqnarray}\nwith\n\\begin{eqnarray}\ne_i\\wedge e_j&:=&\\frac{1}{2}(e_i e_j +J(e_j) e_i) \\nonumber \\\\\ne_i \\mbox{$\\,$\\rule{1ex}{0.4pt}\\rule{0.4pt}{1ex}$\\,$} e_j&:=&\\delta_{ij}=\\frac{1}{2}(e_i\ne_j-J(e_j) e_i).\\label{con}\n\\end{eqnarray}\nWe may use the symbol `$i$' to denote $e_2\\wedge e_1$, because of\nthe properties\n\\begin{eqnarray}\ni^2&=&(e_2\\wedge e_1)(e_2\\wedge e_1)=e_2 e_1 e_2\ne_1=-e_2^2e_1^2=-1\\nonumber \\\\{}%\n[i,X_I]_{deg\\vert X_I\\vert}&=&0.\n\\end{eqnarray}\nHere $deg\\vert X_I\\vert$ denotes the {\\bf Z}$_2$--grade of the\nhomogeneous element $X_I$. For (odd) even elements we have the\n(anti)commutator. This graduation is induced by $J$ and has the\nstructure\n\\begin{eqnarray}\nCL&=&CL_+\\oplus CL_-,\n\\end{eqnarray}\nwhere the even (+) part constitutes a subalgebra, whereas the\nodd (-) part has only a $CL_+$ module structure. In the {\\bf\nE(2)} case, $CL_+$ is generated by $\\{1,i\\}$ and is itself a\nClifford algebra isomorphic to the field ${\\bf C}$ of complex numbers.\n\nNow we want to emphasize the operator character of complex\nnumbers. Therefore we calculate the left action of the element\n`$i$' on the base vectors\n\\begin{eqnarray}\ni e_1 &=& e_2\\nonumber \\\\\ni e_2 &=& -e_1,\n\\end{eqnarray}\nwhich is a counterclockwise rotation by $\\pi\/2$. If we choose\nan arbitrary unit reference vector $e$, $e^2=1$, we may\nwrite the elements in $CL_+$ as\n\\begin{eqnarray}\nCL_+\\ni z&:=&x+y i,\\quad x,y \\in {\\bf K}.\n\\end{eqnarray}\nSo we have\n\\begin{eqnarray}\nz&=&ze^2=(x e+ y ie)e=v e\n\\end{eqnarray}\nwith $v \\in {\\bf E(2)}$. From\n\\begin{eqnarray}\n\\{e, ie\\}_+&=&eie+iee=-ie^2+ie^2=0\n\\end{eqnarray}\nwe have $e \\perp ie$. 
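As an aside, the relations used so far can be checked numerically. The sketch below is our own check; it assumes the standard faithful representation of $CL({\bf E(2)},\delta)$ by real $2\times 2$ matrices (the text itself works representation-free) and verifies $i^2=-1$, the rotation action of $i$, and the coordinate interpretation of an operator spinor.

```python
# Numerical check (our own illustration; the text is representation-free):
# in the 2x2 real matrix representation of CL(E(2), delta) with
# e1 = diag(1, -1) and e2 = antidiag(1, 1), the even element
# z = x + y*i, i = e2 e1, maps the reference vector e1 to x e1 + y e2.

def mat_mul(A, B):
    return tuple(
        tuple(sum(A[r][k] * B[k][c] for k in range(2)) for c in range(2))
        for r in range(2)
    )

def mat_add(A, B):
    return tuple(tuple(A[r][c] + B[r][c] for c in range(2)) for r in range(2))

def mat_scale(t, A):
    return tuple(tuple(t * A[r][c] for c in range(2)) for r in range(2))

ONE = ((1, 0), (0, 1))
E1 = ((1, 0), (0, -1))
E2 = ((0, 1), (1, 0))

I = mat_mul(E2, E1)                          # the unit bivector e2 e1
assert mat_mul(I, I) == mat_scale(-1, ONE)   # i^2 = -1
assert mat_mul(I, E1) == E2                  # i rotates e1 to e2 ...
assert mat_mul(I, E2) == mat_scale(-1, E1)   # ... and e2 to -e1

# z = x + y*i acting on e1 yields the vector with coordinates (x, y):
x, y = 3, 4
z = mat_add(mat_scale(x, ONE), mat_scale(y, I))
assert mat_mul(z, E1) == mat_add(mat_scale(x, E1), mat_scale(y, E2))
print("operator spinor z maps e1 to x*e1 + y*e2")
```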
Thus $\\{e,ie\\}$ span {\\bf E(2)}, and\nwe are able to rotate the coordinate system by an orthogonal\ntransformation to achieve\n\\begin{eqnarray}\ne &=&e_1\\nonumber \\\\\ni e &=&e_2.\n\\end{eqnarray}\nThe map $z \\rightarrow ze$ is a bijective map from $CL_+\n\\rightarrow {\\bf E(2)}$, because $e$ is invertible. On the other\nhand we may look at the map $z : {\\bf E(2)} \\rightarrow {\\bf\nE(2)}$ with $a \\rightarrow za=v$ for arbitrary $a \\in$ {\\bf E(2)}\nand fixed $z\\in CL_+$.\n\nOnly with the choice $e=e_1$ are we able to interpret the scalar\nand bivector parts of an operator spinor as coordinates.\n\nThe modulus may be defined as follows\n\\begin{eqnarray}\n\\vert z \\vert^2 &:=& zz\\tilde{~} =(x+iy)(x-iy)=x^2+y^2.\n\\end{eqnarray}\n\n$CL_+$ is isomorphic to the field ${\\bf C}$ and therefore\nalgebraically closed. Thus it is possible to find a root\n\\begin{eqnarray}\nz&=&w^2= \\vert z\\vert \\frac{z}{\\vert z\\vert}=\\vert z\\vert u^2,\\quad\nuu\\tilde{~}=1,\\quad \\vert z\\vert \\in \\mbox{{\\bf K}}\\sim\\mbox{{\\bf D}}.\n\\end{eqnarray}\nWe can reformulate the map as\n\\begin{eqnarray}\nze&=&w^2e=\\vert z\\vert u^2 e=\\vert z\\vert ueu\\tilde{~}.\n\\end{eqnarray}\nThis decomposes the left action of $z$ into a dilation and a\nspinorial rotation. Because of $u\\tilde{~}=u^{-1}$, $u$ is a {\\bf\nspin(2)} transformation. The transformation obtained by the left\naction of $u$ is of half angle type.\n\nThis point of view is independent of the special vector $e$ and\nemphasizes as well the operator character of the iteration\nformula. Indeed the iteration is a sequence of maps from ${\\bf\nE(2)} \\rightarrow {\\bf E(2)}$, where $a \\in {\\bf E(2)}$ is given\nby $z_n(e)$.\n\nThis works well, because $CL_+ \\cong {\\bf C}$ is algebraically\nclosed and products as well as sums of $CL_+$ elements yield\nnew $CL_+$ elements, which can be interpreted as new operator\nspinors $z^\\prime$. In the general case, as in the hyperbolic\n
In the general case, as in the hyperbolic\none, only the multiplicative structure forms a (Lipschitz) group\n{\\bf D}$\\times${\\bf spin(p,q)}, whereas the additive group is in\ngeneral incompatible with the geometric structure.\nFor example we find two timelike vectors in the forward\nlight cone, which become space like when added.\n\nA physically sound dynamic model should therefore have an\ninvariant, i.e. multiplicative structure. By studying\nmultiplicative structures in vector spaces admitting one higher\ndimension and performing a split \\cite{HesSob,HesZie} one can\nachive affine transformations with {\\bf D}$\\times${\\bf\nIspin(p,q)}. For clarity and brevity, as well as for comparison\nwith the results of the literature we omit this complication.\nHowever we take it into account, when we perform the actual\ncalculations.\n\n\\section*{III Clifford Algebras for Arbitrary Bilinear Forms}\n\nIn this section we give a brief account on Clifford algebras\nwith arbitrary bilinear forms. It's main purpose is to show the\nconnection to quadratic algebras. For a somehow polemic\ndiscussion of Grassmann or Clifford algebra as a basic tool in\nphysics we refer to \\cite{Ozi,Hes-clif}.\n\nAt first glance it is surprising to have non symmetric bilinear\nforms in Clifford algebras, because in the usual approach\n\\cite{Dir} they arise naturally with symmetric bilinear or\nsesquilinear forms. The situation looks even more puzzling when\nnoticing the universality of Clifford algebras, usually stated\nas: ''There is up to isomorphisms only one unique algebraic\nstructure compatible with a bilinear form of signature (p,q)\non the space {{\\cal V}}``.\n\nWhy is it worse to study isomorphic structures? 
In \\cite{Fau} it\nwas demonstrated that the physical content of a theory depends\nsensitively on the embedding $\\bigwedge{{\\cal V}}\\rightarrow CL({{\\cal V}},B)$ of\nthe Grassmann or exterior algebra into the Clifford algebra.\nFor example, normal ordering is exactly such a change of this embedding.\n\nHere we will give the connection between such changes and\nthe properties of quadratic algebras. They provide us with\na deeper geometric understanding, as described in section IV.\n\nChevalley \\cite{Che} was the first to decompose the Clifford\nalgebra product into parts. These parts then explicitly\nexhibit the twofold structure of the Clifford elements.\n\nOne part acts like a derivation on the space $\\bigwedge{{\\cal V}}$ of\nexterior powers of ${{\\cal V}}$. In particular, if $\\omega_x=x\\mbox{$\\,$\\rule{1ex}{0.4pt}\\rule{0.4pt}{1ex}$\\,$}$ is a\nform of degree $1$, it constitutes a map from ${{\\cal V}}$ into the field\n{\\bf K} ($x,y \\in {{\\cal V}}$,~$\\mbox{$\\,$\\rule{1ex}{0.4pt}\\rule{0.4pt}{1ex}$\\,$} : {{\\cal V}}\\rightarrow {{\\cal V}}^\\ast$),\n\\begin{eqnarray}\n\\omega_x(y)=x\\mbox{$\\,$\\rule{1ex}{0.4pt}\\rule{0.4pt}{1ex}$\\,$} y&:=&B(x,y) \\in {\\bf K}.\n\\end{eqnarray}\nThus $\\mbox{$\\,$\\rule{1ex}{0.4pt}\\rule{0.4pt}{1ex}$\\,$}$ is a dual isomorphism parameterized by $B$.\nThe second part of the Clifford product is simply the exterior\nmultiplication.\n\nMarcel Riesz \\cite{Rie} reexpressed the\ncontraction and the exterior multiplication in terms of a grade\ninvolution $J$ and the Clifford product. 
He obtained for char\n{\\bf K} $\\not=$ 2, $x\\in{{\\cal V}},\\, u\\in CL$\n\\begin{eqnarray}\nx\\mbox{$\\,$\\rule{1ex}{0.4pt}\\rule{0.4pt}{1ex}$\\,$} u&:=&\\frac{1}{2}(xu-J(u)x)\\nonumber \\\\\nx\\wedge u&:=&\\frac{1}{2}(xu+J(u)x).\n\\end{eqnarray}\nOne is able to extend these operations to higher degrees of\nmultivectors by $(a,b,c \\in {{\\cal V}},\\, X\\in CL)$\n\\begin{eqnarray}\n(a\\wedge b)\\mbox{$\\,$\\rule{1ex}{0.4pt}\\rule{0.4pt}{1ex}$\\,$} X&=&a\\mbox{$\\,$\\rule{1ex}{0.4pt}\\rule{0.4pt}{1ex}$\\,$}(b\\mbox{$\\,$\\rule{1ex}{0.4pt}\\rule{0.4pt}{1ex}$\\,$} X)\\nonumber \\\\\na\\mbox{$\\,$\\rule{1ex}{0.4pt}\\rule{0.4pt}{1ex}$\\,$}(bc)&=&(a\\mbox{$\\,$\\rule{1ex}{0.4pt}\\rule{0.4pt}{1ex}$\\,$} b)c+J(b)(a\\mbox{$\\,$\\rule{1ex}{0.4pt}\\rule{0.4pt}{1ex}$\\,$} c).\n\\end{eqnarray}\nAssociativity of the wedge product and other properties are\nshown in \\cite{Rie}. A detailed account of such properties is\ngiven in \\cite{Ozi}.\n\nIn the following it is convenient to use explicit, not\nnecessarily normalized or orthogonal, generating sets. Especially for\nlow dimensional examples this will be useful. Therefore we\ndefine the left--contraction and the exterior product\nas (${{\\cal V}}=span\\{e_1,\\ldots,e_n\\}$)\n\\begin{eqnarray}\ne_i\\mbox{$\\,$\\rule{1ex}{0.4pt}\\rule{0.4pt}{1ex}$\\,$} e_j&:=&B(e_i,e_j)\\cong [B_{ij}]\\nonumber \\\\\ne_i\\wedge e_j&:=&\\frac{1}{2}(e_i e_j- e_j e_i)\n\\end{eqnarray}\nwith the standard grade involution $J(e_i)=-e_i$.\n\nNext we may decompose $B$ into symmetric and antisymmetric parts\n\\begin{eqnarray}\nG+F&:=&B \\nonumber \\\\\nG^T& =&G \\nonumber \\\\\nF^T&=&-F\n\\end{eqnarray}\nwhere ${}^T$ is the matrix transposition. In the case of an\nalgebra over the complex numbers one has to use Hermitian\nadjunction, but here we deal exclusively with real algebras.\n\nWe now specialize to two dimensions. 
Then we are left with four\nparameters, three from the symmetric and one from the antisymmetric part.\n\\begin{eqnarray}\n[G]&=&\\kla{\\begin{array}{cc}\n G_{11} & G_{12} \\\\\n G_{12} & G_{22}\n \\end{array}} \\nonumber \\\\\n{}[F]&=&\\kla{\\begin{array}{cc}\n 0 & F_{12} \\\\\n -F_{12}& 0\n \\end{array}}.\n\\end{eqnarray}\nThe even part of this algebra satisfies a quadratic equation for\nevery element $z=x+y e_2\\wedge e_1$ where\n\\begin{eqnarray}\n(e_2\\wedge e_1)^2&=&(e_2 e_1-B_{21})(e_2\\wedge e_1) \\nonumber \\\\\n &=&e_2(e_1\\mbox{$\\,$\\rule{1ex}{0.4pt}\\rule{0.4pt}{1ex}$\\,$} (e_2\\wedge e_1))-B_{21}e_2\\wedge\n e_1 \\nonumber \\\\\n &=&e_2(B_{12}e_1-B_{11}e_2)-B_{21}e_2\\wedge\n e_1 \\nonumber \\\\\n &=&(B_{12}-B_{21})e_2\\wedge\n e_1-(B_{11}B_{22}-B_{12}B_{21}) \\nonumber \\\\\n &=&-2F_{21}e_2\\wedge e_1-det(B). \\label{q1}\n\\end{eqnarray}\nNow we expand $det(B)$\n\\begin{eqnarray}\ndet(B)&=&B_{11}B_{22}-B_{12}B_{21} \\nonumber \\\\\n &=&G_{11}G_{22}-(G_{12}+F_{12})(G_{12}-F_{12}) \\nonumber \\\\\n &=&F_{12}^2+det(G) \\label{det}\n\\end{eqnarray}\nand we arrive at\n\\begin{eqnarray}\n(e_2\\wedge e_1)^2&=&2F_{12}e_2\\wedge e_1-det(G)-F_{12}^2.\n\\end{eqnarray}\nIf there is no antisymmetric part (i.e. 
$F_{12}\\equiv 0$), we\nspecialize to\n\\begin{eqnarray}\n(e_2\\wedge e_1)^2&=&-det(G)\n\\end{eqnarray}\nand the determinant of $G$ describes the geometry at hand.\nIf $det(G)$ is positive we arrive at an (anti) Euclidean geometry.\nIn the negative case the geometry is hyperbolic.\n\nWe are free to choose another algebra basis via a new dotted\nwedge product defined by\n\\begin{eqnarray}\ne_i\\dot{\\wedge} e_j&:=&F_{ij}+e_i\\wedge e_j.\n\\end{eqnarray}\nThereby we have incorporated the whole antisymmetric part in\nthe wedge product, and the Clifford product decomposes as\n\\begin{eqnarray}\ne_i e_j&=&B_{ij}+e_i\\wedge e_j \\nonumber \\\\\n &=&G_{ij}+e_i\\dot{\\wedge} e_j.\n\\end{eqnarray}\nWe define a new set of elements spanning the algebra, which\nreads in two dimensions\n\\begin{eqnarray}\n\\{ Y_J\\} &:=&\\{1,e_2\\dot{\\wedge} e_1,e_1,e_2\\}\\label{Y}\n\\end{eqnarray}\nwhich leads to another quadratic relation\n\\begin{eqnarray}\n(e_2\\dot{\\wedge} e_1)^2&=&(e_2\\wedge e_1+F_{21})^2 \\nonumber \\\\\n &=&(e_2\\wedge e_1)^2+2F_{21}e_2\\wedge e_1+F_{21}^2 \\nonumber \\\\\n &=&2(F_{21}+F_{12})e_2\\wedge e_1\n -det(G)-F_{12}^2+F_{21}^2 \\nonumber \\\\\n &=&-det(G). \\label{q2}\n\\end{eqnarray}\nIn this construction the linear term has been absorbed in the\ndotted wedge. The quadratic relation has simplified to the\nhomogeneous case discussed above, but now not as a special case.\n\nNext we introduce a (pseudo) norm function on the\nquadratic subalgebra. In the Clifford algebra there are several\nsuch constructions, known as spinor norms \\cite{Lou-spin,Cru}.\n\nFor vector elements $a\\in {{\\cal V}}$ each of the three following maps\nhas its image in the field {\\bf K}. 
With the standard involution\n$J(a)=-a$ we have\n\\begin{eqnarray}\na^2\\rightarrow {\\bf K};\\quad\naJ(a\\tilde{~})=aJ(a)\\tilde{~}=-a^2\\rightarrow {\\bf K};\\quad\naa\\tilde{~}=a^2\\rightarrow {\\bf K}.\n\\end{eqnarray}\nBut applied to bivector elements, the first relation does not map\ninto the field; rather it is exactly the quadratic relation derived\nabove. If we allow arbitrary involutions, $a$ and $J(a)$ need\nnot be parallel vectors, and the same holds for bivector elements.\nIn general we may not expect this map to be scalar valued. To\nanalyze the third map, we recognize\n\\begin{eqnarray}\n(e_i\\wedge e_j)\\tilde{~} &=& (e_i e_j-B_{ij})\\tilde{~}=e_j e_i-B_{ij} \\nonumber \\\\\n &=&e_j\\wedge e_i+B_{ji}-B_{ij} \\nonumber \\\\\n &=&e_j\\wedge e_i-2F_{ij} \\nonumber \\\\\n &=&- e_i\\wedge e_j-2F_{ij}.\n\\end{eqnarray}\nThus we have with (\\ref{q1})\n\\begin{eqnarray}\n(e_i\\wedge e_j)(e_i\\wedge e_j)\\tilde{~}&=&-(e_i\\wedge\n e_j)^2-2F_{ij}e_i\\wedge e_j \\nonumber \\\\\n &=&det(B)+2F_{ij}e_i\\wedge e_j-2F_{ij}e_i\\wedge e_j \\nonumber \\\\\n &=&det(B)\n\\end{eqnarray}\nand with (\\ref{q2}) in the same manner for the dotted case\n\\begin{eqnarray}\n(e_i\\dot{\\wedge} e_j)(e_i\\dot{\\wedge} e_j)\\tilde{~} &=&-(e_i\\dot{\\wedge}\ne_j)(e_i\\dot{\\wedge} e_j)\\nonumber \\\\\n&=&det(G).\n\\end{eqnarray}\nThis reduces in the case of two dimensions to\n\\begin{eqnarray}\n(e_2\\dot{\\wedge} e_1)(e_2\\dot{\\wedge} e_1)\\tilde{~} &=&det(G),\n\\end{eqnarray}\nwhich is, up to a factor of $-4$, the discriminant of the quadratic\nequation (\\ref{q2}).\nIf the discriminant is positive, there are two roots in the\nalgebra (over {\\bf R}), whereas otherwise the field has to be\nalgebraically closed (i.e. 
{\\bf C}), or extracting the root is\nnot possible.\n\nWe introduce the abbreviations $X \\cong\\{e_2\\wedge e_1$ or\n$e_2\\dot{\\wedge} e_1\\}$, $a\\cong \\{2F_{12}$ or $0 \\}$, $b\\cong \\{\n-det(B)$ or $-det(G) \\}$ and may\nwrite the quadratic algebra as\n\\begin{eqnarray}\n\\frac{K[X]}{X^2-aX-b},\n\\end{eqnarray}\nthe polynomial algebra generated by $X$ over the field ${\\bf K}$\nmodulo the quadratic relation. This form is important for\ncomparison with the theory of quadratic algebras, but our aim\nis the exposition of the connection to the geometric relations\nencoded in this formula.\n\n\\section*{IV Quadratic Algebras, Conjugation and Special Elements}\n\nIn this section we give some results exposed in \\cite{Hah},\nwhich provide us with existence and uniqueness theorems. This\nputs our somewhat loose construction above on a firmer footing.\n\nThe constructions are valid in much more general settings, such as\nClifford algebras over finite fields or over modules, which is\nnot yet needed but quite interesting.\n\nLet ${{\\cal R}}$ be a commutative ring and ${{\\cal I}}$ an ideal of ${{\\cal R}}$, then\n${{\\cal R}}\\rightarrow {{\\cal R}}\/{{\\cal I}}={{\\cal A}}$ is in a natural way an ${{\\cal R}}$--algebra.\n\nA free quadratic algebra is obtained if one factorizes the\npolynomial algebra in the indeterminate $X$ over ${{\\cal R}}$ by the ideal\n${{\\cal I}}=(X^2-aX-b)$\n\\begin{eqnarray}\nS&=&\\frac{{{\\cal R}}[X]}{X^2-aX-b}.\n\\end{eqnarray}\nThe identity is\n\\begin{eqnarray}\n1_S&=&1+{{\\cal I}}=1+(X^2-aX-b)\n\\end{eqnarray}\nand because the map $r\\rightarrow r1_S$, $r\\in{{\\cal R}}$, is injective we have\n${{\\cal R}}\\subseteq S$. 
A basis of $S$ as ${{\\cal R}}$--module is given by\n\\begin{eqnarray}\n\\{1&,& v=X+{{\\cal I}}=X+(X^2-aX-b) \\}.\n\\end{eqnarray}\nIt follows that\n\\begin{eqnarray}\nv^2&=&av+b.\n\\end{eqnarray}\nWe have several isomorphisms:\n\\begin{eqnarray}\nS_1&=&\\frac{{{\\cal R}}[X]}{X^2-X}\\cong {{\\cal R}}\\oplus {{\\cal R}}\n\\end{eqnarray}\nwith the diagonal product map. $\\phi : {{\\cal R}}\\oplus {{\\cal R}} \\rightarrow\nS_1$ is an isomorphism and we denote $S_1$ as the trivial quadratic\n${{\\cal R}}$--algebra. This is the ''perplex`` case from above!\n\\begin{eqnarray}\nS_2&=&\\frac{{{\\cal R}}[X]}{X^2}\n\\end{eqnarray}\nconstitutes the algebra of dual numbers.\n\nDenoting the units of ${{\\cal R}}$ as ${{\\cal R}}^\\ast$, we can formulate the\nisomorphism criterion \\cite{Hah}(1.1).\n\n{\\it Let ${{\\cal R}}[X]\/(X^2-aX-b)$ and ${{\\cal R}}[X]\/(X^2-cX-d)$ be quadratic\nalgebras over ${{\\cal R}}$. Then\n\\begin{eqnarray}\n\\frac{{{\\cal R}}[X]}{X^2-aX-b}&\\cong&\\frac{{{\\cal R}}[X]}{X^2-cX-d} \\nonumber\n\\end{eqnarray}\niff there exist elements $r\\in{{\\cal R}}$ and $u\\in{{\\cal R}}^\\ast$ such that\n(i) $ c=ua+2r$ (ii) $d=u^2b-rua-r^2$.}\n\nNow, if $X^2-aX-b$ has a root, say $\\gamma$, in ${{\\cal R}}$ then we have\n\\begin{eqnarray}\nX^2-aX-b&=&(X-\\gamma)(X-(a-\\gamma)) \\label{root}\n\\end{eqnarray}\nand $a-\\gamma$ is another root (in ${{\\cal R}}$) of the quadratic\nequation. 
If $a-\\gamma=\\gamma$ then $\\gamma$ is a double root.\nWe can state the following\n\\begin{eqnarray}\n\\begin{array}{ccccl}\n(1) & \\frac{{{\\cal R}}[X]}{X^2-aX-b} &\\cong& \\frac{{{\\cal R}}[X]}{X^2-taX-t^2b} &\n\\mbox{for $t\\in {{\\cal R}}^\\ast$} \\\\\n(2) & \\frac{{{\\cal R}}[X]}{X^2-aX-b} &\\cong& \\frac{{{\\cal R}}[X]}{X^2-cX} &\n\\Leftrightarrow X^2-aX-b\\quad \\mbox{has a root in ${{\\cal R}}$} \\\\\n(3) & \\frac{{{\\cal R}}[X]}{X^2-aX-b} &\\cong& \\frac{{{\\cal R}}[X]}{X^2} &\n\\Leftrightarrow X^2-aX-b\\quad \\mbox{has a double root in ${{\\cal R}}$}\n\\end{array} \\nonumber\n\\end{eqnarray}\n{\\bf Examples:}\n\\begin{enumerate}\n\\item ${{\\cal R}}\\cong {\\bf C}$: The only quadratic algebras are ${\\bf\nC}[X]\/(X^2-X)$, the complex trivial algebra, and ${\\bf\nC}[X]\/(X^2)$, the algebra of complex dual numbers, because of the\nexistence of roots for every $X\\in{\\bf C}$.\n\n\\item ${{\\cal R}}\\cong {\\bf R}$: We have three cases, because negative\nnumbers possess no roots in ${\\bf R}$.\n\\begin{eqnarray}\nS &\\cong&\n\\left\\{\\begin{array}{ccl}\n\\frac{{\\bf R}[X]}{X^2-X} & a^2+4b > 0 & \\mbox{trivial, ''perplex``} \\\\\n\\frac{{\\bf R}[X]}{X^2} & a^2+4b = 0 & \\mbox{dual numbers} \\\\\n\\frac{{\\bf R}[X]}{X^2+1}\\cong {\\bf C} & a^2+4b < 0 &\n\\mbox{complex numbers}\n\\end{array}\\right.\n\\end{eqnarray}\nOf course, $a^2+4b$ is the discriminant of the quadratic relation.\n\n\\item ${{{\\cal R}}\\cong {\\bf Z}}$: This results in infinitely many\nisomorphism classes\n\\begin{eqnarray}\n\\frac{{\\bf Z}[X]}{X^2-aX-b}&\\cong&\\left\\{\n\\begin{array}{cl}\n\\frac{{\\bf Z}[X]}{X^2-n} & \\mbox{if $a$ is even} \\\\\n\\frac{{\\bf Z}[X]}{X^2-X-n} & \\mbox{if $a$ is odd}\n\\end{array}\\right.\n\\end{eqnarray}\n\\end{enumerate}\nIt turns out that simpler rings (as also finite Galois\nfields) bear much more structure.\n\nIf $\\alpha$ is an (anti) automorphism and $\\alpha^2=id_{{\\cal A}}$, then\n$\\alpha$ is an involution. 
Algebra homomorphisms which preserve\nsuch a structure are called graded homomorphisms. In physics the\nsupersymmetric transformations (mixing of Grassmann parity)\nare not grade preserving and thus respect less symmetry.\n\nIn a quadratic algebra we may introduce the involution $\\sigma :\nS \\rightarrow S$ on the basis\n\\begin{eqnarray}\n\\{ 1, v&=&X+(X^2-aX-b) \\} \\nonumber \\\\\n1^\\sigma &=& 1 \\nonumber \\\\\nv^\\sigma &=& (a-v)\n\\end{eqnarray}\nwhich results in\n\\begin{eqnarray}\n(x+yv)^\\sigma &=& (x+ya)-yv \\nonumber \\\\\n(x+yv)^{\\sigma\\sigma} &=& (x+yv).\n\\end{eqnarray}\nThis involution interchanges the roots of the quadratic relation\n(\\ref{root}). In the complex case this is the ordinary complex\nconjugation.\n\nNow this kind of involution is ''standard`` in\nquadratic algebras and induces the {\\bf Z}$_2$--grading of the\nClifford algebra.\nLet ${{\\cal A}}_i$, $i\\in \\{0,1\\}$, be ${{\\cal R}}$--submodules and ${{\\cal A}}={{\\cal A}}_0\\oplus\n{{\\cal A}}_1$. As ${{\\cal R}}1 \\subseteq {{\\cal A}}_0$, ${{\\cal A}}$ is a {\\bf Z}$_2$--graded\nalgebra. Elements $a\\in {{\\cal A}}_i$ ($\\partial a=i$, grade of $a$) are\ncalled homogeneous.\n\nWe have two possibilities to introduce tensor products in graded\nalgebras via\n\\begin{eqnarray}\n{{\\cal A}} {\\otimes}_{{\\cal R}} {{\\cal B}} &:& (a\\otimes b)(a^\\prime \\otimes\nb^\\prime):=(aa^\\prime\\otimes bb^\\prime) \\nonumber \\\\\n{{\\cal A}} {\\hat\\otimes}_{{\\cal R}} {{\\cal B}} &:& (a\\otimes b)(a^\\prime \\otimes\nb^\\prime):=(-)^{\\partial a^\\prime\\partial b}(aa^\\prime\\otimes\nbb^\\prime).\n\\end{eqnarray}\nTo the algebra ${{\\cal A}}$ we find the opposite algebra ${{\\cal A}}^{op}$ by\nreversing the product, $(ab)^{op}=b^{op}a^{op}$, which is a map\nfrom ${{\\cal A}}$ into ${{\\cal A}}^{op}$. 
Hence we construct the enveloping\nalgebra ${{\\cal A}}^e:={{\\cal A}}\\otimes_{{\\cal R}} {{\\cal A}}^{op}$, as a (${{\\cal A}},{{\\cal A}}$)--bimodule.\nThere is a unique homomorphism $\\Phi : {{\\cal A}}^e\\rightarrow {{\\cal A}}$\nwhich satisfies $\\Phi ( a\\otimes b^{op})=ab$. If there also exists\na homomorphism $\\Theta : {{\\cal A}}\\rightarrow {{\\cal A}}^e$ (coproduct) such that\n$\\Phi\\Theta=id_{{\\cal A}}$ then the algebra is called separable. It then\nfollows that ${{\\cal A}}^e={{\\cal A}}\\otimes_{{\\cal R}}{{\\cal A}}^{op}=ker(\\Phi)\\oplus\\Theta({{\\cal A}})$. With\n\\cite{Hah}(2.1)(2.3) we state:\n\\begin{enumerate}\n\\item {\\it ${{\\cal A}}$ is a separable ${{\\cal R}}$--algebra iff ${{\\cal A}}$ has a\nseparability idempotent, $e=\\Theta(1)$.}\n\\item {\\it The separability idempotent of a free quadratic\nalgebra $S$ over ${{\\cal R}}$ is unique.}\n\\end{enumerate}\nNow it is possible to classify separable free quadratic\n${{\\cal R}}$--algebras by introducing the group $Qu_f({{\\cal R}})$. To this end define\n\\begin{eqnarray}\nQ&:=&\\{(a,b) \\vert a^2+4b\\in {{\\cal R}}^\\ast\\}\n\\end{eqnarray}\nwith the product\n\\begin{eqnarray}\n(a,b)*(c,d)&:=&(ac,a^2d+c^2b+4bd)\n\\end{eqnarray}\nand the quotients $[a,b]=(a, b)\/{{\\cal R}}^{\\ast 2}$ form the group\n\\begin{eqnarray}\nQu_f({{\\cal R}})&:=&\\{[a,b]\\vert (a,b)\\in Q\\}.\n\\end{eqnarray}\nThe cardinality of $Qu_f({{\\cal R}})$ is the number of isomorphism\nclasses of (nontrivial) quadratic algebras, e.g.\n\\begin{eqnarray}\nQu_f({\\bf C})&\\cong&\\frac{{\\bf C}^\\ast}{{\\bf C}^{\\ast^2}}=1 \\nonumber \\\\\nQu_f({\\bf R})&\\cong&\\frac{{\\bf R}^\\ast}{{\\bf R}^{\\ast^2}}={\\bf Z_2}.\n\\end{eqnarray}\nThis can be extended to a graded version $QU_f({{\\cal R}})$.\n\nOne observes a connection between grading and standard\ninvolution via the special elements. 
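A quick sanity check of the group law on $Q$ (our own illustration, not part of the original text): the discriminant $a^2+4b$ is multiplicative under the product $(a,b)*(c,d)$, since $(ac)^2+4(a^2d+c^2b+4bd)=(a^2+4b)(c^2+4d)$; this is why the defining condition that $a^2+4b$ be a unit is preserved. A Python sketch over the integers:

```python
import random

def qprod(p, q):
    # product (a,b)*(c,d) := (ac, a^2 d + c^2 b + 4bd), as given in the text
    a, b = p
    c, d = q
    return (a * c, a * a * d + c * c * b + 4 * b * d)

def disc(p):
    # discriminant a^2 + 4b of the quadratic relation X^2 - aX - b
    a, b = p
    return a * a + 4 * b

# spot-check: the discriminant is multiplicative under *
for _ in range(200):
    p = (random.randint(-9, 9), random.randint(-9, 9))
    q = (random.randint(-9, 9), random.randint(-9, 9))
    assert disc(qprod(p, q)) == disc(p) * disc(q)
```

In particular, the complex class $(0,-1)$ squares to $(0,4)$, whose discriminant $16$ is again a square times a unit.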
To see this,\nlet $M=({{\\cal V}},B)$ be the pair of a vector space with a bilinear\nform (or a quadratic module), then we can build the Clifford\nalgebra $CL(M)=CL({{\\cal V}},B)$. We now get \\cite{Hah}(5.4)\n\n{\\it The decomposition $CL(M)=CL_0(M)\\oplus CL_1(M)$ is a ({\\bf\nZ}$_2$) grading of $CL(M)$. $CL_0(M)$ is a subalgebra and\n$CL_1(M)$ is a $CL_0(M)$ module.}\n\nWe call $\\sigma$ a standard involution if $\\sigma$ is an\nantiinvolution and $aa^\\sigma\\in{{\\cal R}}$ $\\forall a\\in CL(M)$. In\nthis case we define a (pseudo) norm and a trace as\n\\begin{eqnarray}\nnr(a)&:=&aa^\\sigma (\\in {{\\cal R}}) \\nonumber \\\\\ntr(a)&:=&a+a^\\sigma (\\in {{\\cal R}}). \\label{norm}\n\\end{eqnarray}\n\nAn element $z\\in CL(M)$ is a special element if $\\{1,z\\}$ is a\nbasis of the centralizer $Cen_{CL(M)} CL_0(M)=\\{c\\in CL(M) \\vert\ncd=dc\\,\\, \\forall d\\in CL_0(M)\\}$. One can conclude\nthat\\footnote{See \\cite{Hah} (8.3)}\n\\begin{enumerate}\n\\item if rank $M$ is odd, then $z\\in CL_1(M)$, $z^\\sigma=-z$ and\n$z^2=b$ with $b\\in{{\\cal R}}$.\n\\item if rank $M$ is even, then $z\\in CL_0(M)$, $z^\\sigma=a-z$\nand $z^2=az-b$ with $a,b\\in{{\\cal R}}$, $a^2+4b\\in{{\\cal R}}^\\ast$.\n\\end{enumerate}\n\n{\\it If $\\gamma$ is a root of $X^2-aX-b$ in $S$, then $S$ has\nthe grading $S=S_0\\oplus S_1$, where\n\\begin{eqnarray}\nS_0&=&Cen_S(\\gamma)=\\{s\\in S \\vert \\gamma s= s\\gamma \\} \\nonumber \\\\\nS_1&=&\\{s\\in S \\vert \\gamma s+ s\\gamma=as \\}.\n\\end{eqnarray}}\n\nThe grading is trivial if $\\gamma$ is in the center of $S$.\n\nThese properties are strongly interwoven and can be used in\nconstructing representations of Clifford algebras \\cite{Dim}.\n\nThe existence of special elements in a general setting is proved\nin chapter 10 of \\cite{Hah}. 
All these constructions are possible\nin higher dimensions, but our above naive geometric\ninterpretation has then to be refined.\n\nAs a last topic we have a look at the representations of\nClifford algebras. A homomorphism $\\Phi$ of ${{\\cal R}}$--algebras\n\\begin{eqnarray}\n\\Phi &:& CL(M)\\rightarrow End_S(P)\n\\end{eqnarray}\nwhere $M$ is a quadratic module, $S$ a ${{\\cal R}}$--algebra and $P$ a\nright $S$--module is called an ($S$--)representation of $CL(M)$.\nIn \\cite{Hah}(8.7)(8.8) the connection between division\nalgebras, gradings and the quadratic algebra is shown and\nconnected with the existence of roots in $S$. The quadratic\nalgebra $S$ appears as a tensor factor in such representations.\n\nDespite the universality of Clifford algebras, we need for\nphysical applications norms and traces, as well as the gradings\n(vector space and {\\bf Z}$_2$). Therefore we have to distinguish\nbetween homomorphisms preserving these additional structures\nand those which do not. The free quadratic groups etc.\ncharacterize the isomorphism classes available, where the kernel\nof the factorization parameterizes distinct but isomorphic\nrepresentatives. Such a parameterization can be done equally well\nby parameterizing the isomorphic ideals in constructing Clifford\nalgebras. This was done in \\cite{Ozi-hecke}.\n\n\\section*{V The 2--dim. Case and Numeric Experiments}\n\nIn this section we discuss the 2--dim. case\nover {\\bf R}. Because of the even dimension we expect to have\ninhomogeneous terms and a rich structure. Using the naturally\ngiven standard involution, we can construct (pseudo) norms and\ntraces as above and investigate the Mandelbrot set in each of the\nthree cases. We do not get a quadratic set, but a ''light cone``\nstructure in the hyperbolic case.\n\nLet us choose a generating set $\\{e_1, e_2\\}$ as in (\\ref{X}) or\nin (\\ref{Y}), and a bilinear form $B=G+F$ as above. We denote\nthe bivector element as $\\gamma$. 
Here one has to take care,\nbecause a change in the quadratic relation of $\\gamma$ is\nrelated to a change of the representation of the\nalgebra. At the same time this results in a redefinition of the\nwedge, as done above by using the two extreme cases $\\wedge$ and\n$\\dot{\\wedge}$.\n\nWe interpret the iteration formula in the operational sense\nexplained in section II and plot all figures in the\n$\\{1,\\gamma\\}$--plane. A translation into ${{\\cal V}}$ may be done as\nexplained also in section II. For comparison to other results\nthis is here {\\it not} done.\n\nWith Sylvester's theorem we could achieve a diagonal form for the\nsymmetric part of $B$ with $G\\equiv diag\\{\\pm 1, \\pm 1\\}$. But a\nrescaling of the base vectors by $\\sqrt{\\vert G_{11}\\vert}$ and\n$\\sqrt{\\vert G_{22}\\vert}$ would affect the magnitude of the\nantisymmetric part also. This is contained in the isomorphism\ncriterion and will therefore be done only in the quadratic\nalgebra and not in the whole Clifford algebra.\n\nDefine $S={{\\cal R}}[\\gamma]\/(\\gamma^2-a\\gamma-b)$. We get from (\\ref{det})\n\\begin{eqnarray}\n\\gamma^2&=&2F_{12}\\gamma-det(B) \\nonumber \\\\\n &=&2F_{12}\\gamma-det(G)-F_{12}^2.\n\\end{eqnarray}\nHence we set\n\\begin{eqnarray}\na&:=&2F_{12} \\nonumber \\\\\nb&:=&-det(B)=-det(G)-F_{12}^2.\n\\end{eqnarray}\nThe discriminant is connected to the metric via\n\\begin{eqnarray}\nd\\, =\\, a^2+4b&=&-4det(G).\n\\end{eqnarray}\nThe discriminant of the quadratic relation is thus $-4$ times\nthe determinant of the symmetric part of the bilinear form!\n\nIf $det(G) < 0$ we have two roots in the algebra $S$. If the algebra\nis not dual (double root) we have $det(G)\\not= 0$.\n\nAs explained in section IV, there exists a standard involution\nif the algebra $S$ is separable. In our case this is the\nreversion in $CL$. 
It is constructed in the basis $\\{1,\\gamma\\}$ by\n\\begin{eqnarray}\n1^\\sigma &=& 1 \\nonumber \\\\\n\\gamma^\\sigma &=& (a-\\gamma)=2F_{12}-\\gamma.\n\\end{eqnarray}\nThe inhomogeneous additive term is quite uncommon in usual\napproaches to Clifford algebras. Let us emphasize that in the\nquantum mechanical case, where the field is complex, we are\n{\\it always} able to find an algebra isomorphism achieving\n$\\gamma^{\\prime\\sigma}=\\gamma^\\prime$, because there is only one\nnontrivial isomorphism class. This may be an argument for using\ncomplex numbers in quantum mechanics.\n\nThe iteration formula is now obtained as follows:\n\\begin{eqnarray}\nz&:=&z_x+z_y\\gamma;\\quad c:=c_x+c_y\\gamma\n\\end{eqnarray}\nand with\n\\begin{eqnarray}\nz_{n+1}&:=&z_n^2+c\n\\end{eqnarray}\nwe obtain\n\\begin{eqnarray}\nz_{x(n+1)} &=& z_{x(n)}^2+bz_{y(n)}^2+c_x \\nonumber \\\\\nz_{y(n+1)} &=& 2z_{x(n)}z_{y(n)}+az_{y(n)}^2+c_y.\n\\end{eqnarray}\nThe spinor (pseudo) norm is then given by\n\\begin{eqnarray}\nnr^2(z)&=&zz\\tilde{~}=z_x^2-bz_y^2+az_xz_y.\n\\end{eqnarray}\nWe have:\n\\begin{itemize}\n\\item[i)] {\\it\n$nr^2(z)$ is positive definite in the complex isomorphism class}\n\\item[ii)] {\\it\n$nr^2(z)$ is positive semidefinite in the dual isomorphism class}\n\\item[iii)] {\\it\n$nr^2(z)$ is indefinite in the trivial isomorphism class\n(hyperbolic case)}\n\\end{itemize}\n{\\bf Proof:} We distinguish two cases.\n\\begin{itemize}\n\\item[a)] Suppose that $y\\equiv 0$. We are left with\n$nr^2(z)=x^2$, which is positive definite in $x$; $nr^2$ is\nsemidefinite in case iii), by isomorphy to the case $a=b=0$.\n\\item[b)] Suppose $y\\not= 0$. We introduce $\\rho=x\/y$ and have\n\\begin{eqnarray}\n\\frac{nr^2}{y^2}&=&\\rho^2+a\\rho-b\\nonumber \\\\\n &=&(\\rho+\\frac{a}{2})^2-(\\frac{a^2}{4}+b)\\nonumber \\\\\n &\\geq&-(\\frac{a^2}{4}+b)=-\\frac{d}{4}=det(G).\n\\end{eqnarray}\nThus if the discriminant is negative the norm is positive\ndefinite. 
If $d=0$, the norm may be zero for non--null\nelements, that is, it is positive semidefinite. If $d > 0$ we are in the\nindefinite hyperbolic case. \\rule{1ex}{1ex}\n\\end{itemize}\n\nThe variable $\\rho$ is connected via an arctan or arctanh to the\nphase of $z$. But a polar decomposition is not obvious in the\nhyperbolic case. See \\cite{Fje} and notice the appearance of\nthe Klein group.\n\nThe line ${{\\cal R}} 1_S$ is stabilized by $\\sigma$ via $z^\\sigma=z$\n$\\Rightarrow$ $y=0$, whereas the trace maps $z$ onto ${{\\cal R}} 1_S$ as\n\\begin{eqnarray}\ntr(z)&=&\\frac{1}{2}(z+z^\\sigma)=x+\\frac{a}{2}y.\n\\end{eqnarray}\nWe define the operator spinor Mandelbrot set as\n\\begin{eqnarray}\nM&:=&\\{c\\vert z_0=(0,0),\\quad nr^2(z_n^c) > 0\\, \\forall n,\\quad\n\\lim_{n \\rightarrow\\infty}\\, nr^2(z^c_n)\\not\\rightarrow\\infty.\\}\n\\end{eqnarray}\n\nIn the complex case we set the parameters as\n\\begin{eqnarray}\na&=&0 \\nonumber \\\\\nb &=& -1,\\quad \\rightarrow\\quad \\gamma^2=-1.\n\\end{eqnarray}\nHence\n\\begin{eqnarray}\nd\\, =\\,a^2+4b&=&-4=-4\\, det(G)\\nonumber \\\\\ndet(G)&=&1,\n\\end{eqnarray}\nwhich results in an (anti) Euclidean geometry. Since $S$ is\nisomorphic to ${\\bf C}$ and hence algebraically closed, every\nelement always has two roots in $S$. The norm\nyields\n\\begin{eqnarray}\nnr^2_{\\bf C}(z)&=&x^2+y^2.\n\\end{eqnarray}\nWe are left with the ordinary Mandelbrot set as shown in figure\n1. All pictures with $d= -4.0$ and arbitrary $a$ are isomorphic\nwithout rescaling. But the coordinate interpretation is no\nlonger possible. 
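The component iteration and pseudo-norm above translate directly into a numerical membership test for the operator spinor Mandelbrot set. The following sketch (function names and bailout values are our own choices, not from the original text) iterates $z_{n+1}=z_n^2+c$ in $S={\bf R}[\gamma]/(\gamma^2-a\gamma-b)$ and halts when $nr^2$ becomes negative or exceeds a divergence threshold:

```python
def step(z, c, a, b):
    """One iteration z -> z^2 + c in R[gamma]/(gamma^2 - a*gamma - b)."""
    x, y = z
    cx, cy = c
    return (x * x + b * y * y + cx, 2 * x * y + a * y * y + cy)

def nr2(z, a, b):
    """Spinor pseudo-norm nr^2(z) = z_x^2 - b*z_y^2 + a*z_x*z_y."""
    x, y = z
    return x * x - b * y * y + a * x * y

def in_mandelbrot(c, a, b, n_max=500, bailout=1e6):
    """Membership test: start at z_0 = 0 and iterate; reject when the
    pseudo-norm becomes negative or diverges past the bailout."""
    z = (0.0, 0.0)
    for _ in range(n_max):
        z = step(z, c, a, b)
        q = nr2(z, a, b)
        if q < 0.0 or q > bailout:
            return False
    return True

# Complex class (a=0, b=-1): the ordinary Mandelbrot set is recovered.
assert in_mandelbrot((-1.0, 0.0), 0.0, -1.0)      # period-2 cycle 0 -> -1 -> 0
assert not in_mandelbrot((1.0, 0.0), 0.0, -1.0)   # orbit 0, 1, 2, 5, 26, ... diverges
```

In the hyperbolic class ($a=0$, $b=1$) the same test implements the halting rule used for the figures: the iteration stops as soon as an element becomes spacelike, i.e. $nr^2<0$.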
If one looks at $d = -4.0$, $a=1.0$ (figure\n2), one obtains up to an {\\bf SO(2)} transformation the\nbilinear form\n\\begin{eqnarray}\nB&=&G+F\\cong\n\\left(\n\\begin{array}{cc}\n1 & \\frac{1}{2} \\\\\n0 & 1\n\\end{array}\n\\right).\n\\end{eqnarray}\nThus we have now\n\\begin{eqnarray}\n\\gamma e_1 &=& (e_2 e_1-B_{21})e_1=e_2 e_1^2=e_2 \\nonumber \\\\\n\\gamma e_2 &=& \\frac{1}{2}e_2-1 e_1.\n\\end{eqnarray}\nThe new transformation obtained by $\\gamma$ : {\\bf\nE(2)}$\\rightarrow$ {\\bf E(2)} does not preserve angles, but areas.\nThe real axis ($\\sigma$ invariant points) is not affected by this.\nThe result is the deformed set of figure 2.\n\nIf the parameter $d$ is changed, this results in a scaling of\nthe $\\gamma$--axis. As in the above case the transition into the\nvector space is not unique. If one chooses the 1$_S$--axis to map\nonto $e_1$ then we arrive at a bilinear form like\n\\begin{eqnarray}\n[B]&=&\n\\left(\n\\begin{array}{cc}\nd & \\frac{a}{2} \\\\\n0 & 1\n\\end{array}\n\\right)\n=\\left(\n\\begin{array}{cc}\nG_{11} & F_{12} \\\\\n0 & 1\n\\end{array}\n\\right),\n\\end{eqnarray}\nwhich is a special case of the parameterization\n\\begin{eqnarray}\n[B] &=&d\n\\left(\n\\begin{array}{cc}\n\\lambda & \\frac{a}{2} \\\\\n0 & \\frac{1}{\\lambda}\n\\end{array}\n\\right).\n\\end{eqnarray}\nThis case is also isomorphic to the complex one, but this time\nwith an additional rescaling (figure 3). Areas are no longer\npreserved.\n\nThe second case is the dual one. Hence the algebra $S$ is no\nlonger separable and degenerates to a 1--dim. scheme. The\ncorresponding parameters are\n\\begin{eqnarray}\na&=&0 \\nonumber \\\\\nb&=&0,\\quad\\rightarrow\\quad\\gamma^2=0.\n\\end{eqnarray}\nThe discriminant vanishes. This case ($X^2=0$ as ideal) results\nin a degeneration of the dynamics. It is the limiting case of\nthe other two. The semidefinite norm is $nr^2(z)=(x+a\/2\\,\ny)^2$, which is sensitive only to one direction. 
In the case\n$a = 0$ this is the $x$--axis, see figure 4. To form a\ndefinite norm, and thus a physically meaningful situation, one\nhas to factor out the superfluous direction. So this case is\nessentially one dimensional. This can take place even if $B$ is\nnondegenerate while $G$ is degenerate.\n\\begin{eqnarray}\n[B]&=&\n\\left(\n\\begin{array}{cc}\nG_{11} & F_{12} \\\\\n-F_{12}& 0\n\\end{array}\n\\right)=\n\\left(\n\\begin{array}{cc}\nG_{11} & 0 \\\\\n0 & 0\n\\end{array}\n\\right)\n+\\left(\n\\begin{array}{cc}\n0 & F_{12}\\\\\n-F_{12} & 0\n\\end{array}\n\\right)\n\\end{eqnarray}\nIn a physical context this case should be called trivial, but\nthis is not to be confused with the classification of the\nquadratic algebras, where this case is the dual one.\n\nThe hyperbolic (or ''perplex``) case is obtained with the\nparameter setting\n\\begin{eqnarray}\na&=&0 \\nonumber \\\\\nb&=&1,\\quad \\rightarrow\\quad \\gamma^2=1.\n\\end{eqnarray}\nThe discriminant becomes $d=4$, hence\n\\begin{eqnarray}\ndet(G)=-1,\n\\end{eqnarray}\nwhich corresponds to the hyperbolic geometry. Not every element\nin $S$ has a root in $S$; in particular $\\gamma$ has none.\n\nIn \\cite{Met} the convergence was proved with the norm $nr_{\\bf\nC}$ from above. In contrast, our considerations give\n\\begin{eqnarray}\nnr^2_{{\\bf R}\\oplus{\\bf R}}&=&x^2-y^2.\n\\end{eqnarray}\nWe recover the ''light cone`` which one expects to find in\nhyperbolic geometry. Hence $x$ is the timelike coordinate and\n$y$ the spacelike one. Backward and forward light cone enclose the\ninvariant real line ${\\bf R} 1_S$.\n\nAs exposed in the introduction, the dynamics (iteration process)\ndoes not respect this structure, so timelike elements will\nbecome spacelike and vice versa. The pictures were done in such a\nway that the iteration halted immediately whenever an element got\nspacelike.\n\nThe most surprising effect is that the light cones become\nseparated by the multiquadratic counterpart of the Mandelbrot\nset. 
The two light cones are thus separated by a timelike\ndistance. On the real axis ${{\\cal R}} 1_S$ things do not change at all.\nSee figure 5.\n\nThe monoquadratic case would be recovered if one ignored the\nhyperbolic structure.\n\nThe hyperbolic case is the most interesting one, because of the\ndifference to the sets obtained in the literature.\nThe asymptotic behaviour is as in the usual case. The separation results\nfrom a deformation of the backward light cone (negative abscissa)\nand a minor deformation of the forward cone. If the picture is\nscaled in such a way that the separation distance is small, one\nobtains the ordinary cone structure. Points near the spacelike\nregion in the backward cone become spacelike during the\niteration. Points inside quadrangles which intersect the real\nline are non divergent points and thus the counterpart to the\nMandelbrot set. The vertical cones without structure constitute\nthe spacelike region.\n\nWhy is this dynamically interesting case called trivial?\n\nThis stems from the quadratic relation\n\\begin{eqnarray}\nX^2-cX&=&0.\n\\end{eqnarray}\nLet us assume that $c=1$\\footnote{The sign does not matter in\nthis case.}, then one arrives at\n\\begin{eqnarray}\nX^2&=&X,\n\\end{eqnarray}\nwhich is a projector equation. 
The algebra may then be\ndecomposed with the help of $X$ into a direct sum:\n\\begin{eqnarray}\n1_S&=&X+(1-X)\\nonumber \\\\\nX(1-X)&=&0\\nonumber \\\\\nX^2=X && (1-X)^2=(1-X)\n\\end{eqnarray}\nTherefore $X$ and $(1-X)$ are mutually annihilating primitive\nidempotents.\n\nThe metric structure is then connected with\n\\begin{eqnarray}\n[B]=[G]+[F]&=&\n\\left(\n\\begin{array}{cc}\n0 & 0 \\\\\n0 & 0\n\\end{array}\n\\right)+\n\\left(\n\\begin{array}{cc}\n0 & \\frac{c}{2} \\\\\n\\frac{-c}{2} & 0\n\\end{array}\n\\right),\n\\end{eqnarray}\nwhich is a symplectic structure, or equivalently\\footnote{With\nthe isomorphism criterion from section IV.} with\n\\begin{eqnarray}\n[B]=[G]&=&\n\\left(\n\\begin{array}{cc}\n\\frac{c^2}{4} & 0 \\\\\n0 & -1\n\\end{array}\n\\right),\n\\end{eqnarray}\nwithout any antisymmetric contribution! This corresponds to\n$X^2=c^2 > 0$. The decomposition is now obtained by the projectors\n\\begin{eqnarray}\ne_\\pm&:=&\\frac{1}{2c}(c\\pm X).\n\\end{eqnarray}\n\nThe parameter $a$ acts as in the complex case, as is seen in\nfigure 6. The fact that the trivial case can always be split\ninto a direct sum with diagonal multiplicative structure was\nessential for the proof in \\cite{Met}.\n\nIn the pure hyperbolic case ($d = 4$, $a=0$) we have for example\n\\begin{eqnarray}\n[B]&=&\n\\left(\n\\begin{array}{cc}\n1 & 0 \\\\\n0 & -1\n\\end{array}\n\\right)\n\\end{eqnarray}\nand\n\\begin{eqnarray}\n\\gamma e_1 &=& e_2\\nonumber \\\\\n\\gamma e_2 &=& e_1,\n\\end{eqnarray}\nwhich is not a rotation, but a space--time inversion. This\ntransformation also flips the orientation of ${{\\cal V}}$, thus a\nphysical interpretation should be charge or parity conjugation.\nBut there is a continuum of such transformations.\n\n\\section*{VI Conclusion}\n\nWe showed with numerical examples that the multiquadratic\nMandelbrot set is superior to the quadratic one. The geometric\ninterpretation fits in all special cases, but then with a\ndistinguished (pseudo) norm. 
The operator spinor approach is the\nkey step in this consideration. In a first step we considered\nthe two-dimensional case, which has to be enlarged to higher\ndimensions. The theorems from \\cite{Hah} then ensure that\nthe same structure appears as a tensor factor in the\nrepresentation theory of Clifford algebras.\n\nThe strong connection between conjugation and (pseudo) norm, as\nwell as with the geometry of the underlying vector space (module\n\\ldots), was shown. Thus knowledge of the bilinear form in ${{\\cal V}}$ provides\nus with all the information needed. One is even able to choose the\nparticular ideal out of the isomorphic ones. The dependence of the\nmultivector structure on these choices was shown.\n\nThe field {\\bf C} plays a special role, as the only one with a single\nnontrivial isomorphism class. This may be the origin of the\nusefulness of the complex numbers in quantum mechanics and\nnonlinear classical mechanics. This fact persists in higher\ndimensions.\n\nWe remarked on the richness of this structure when the underlying\nspace is built up over rings such as {\\bf Z} or finite fields such as {\\bf\nF}$_q$. An investigation in this direction should result in many\nmore distinct cases. These cases are quite interesting in\nquantum theory, because they can be expected to be connected\nwith inequivalent representations. Normally this is achieved only\nwith infinitely many particles.\n\n\\section*{Appendix}\n\\begin{appendix}\n\\setcounter{equation}{0}\n\\renewcommand{\\theequation}{A-\\arabic{equation}}\nThe figures are calculated with a resolution of 800 $\\times$ 800 points\nand 500 iterations in a window $[-5:5]$ for the $1_S$\n(hor.) and $\\gamma$ (vert.) axes. If the norm became negative, the\niteration was stopped. The potential lines indicate the tendency\nof reaching infinity, i.e.\\ of surmounting a given threshold within\n$n$ steps. Twelve such lines are plotted. The interior of the\nMandelbrot set and of the multi quadratic set consist of\nnon-divergent points. 
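The escape-time computation described in this appendix might be sketched as follows. This is a minimal Python illustration, not the authors' original code; it assumes the standard recursion $z \mapsto z^2 + c$ carried out over the hyperbolic plane $z = x + \gamma y$ (with $\gamma^2 = 1$ and pseudo norm $x^2 - y^2$), together with the stopping rules quoted above (negative norm, escape over a threshold, or exhaustion of the iteration budget):

```python
# Sketch of the iteration over hyperbolic numbers z = x + gamma*y, gamma^2 = 1.
# The recursion z -> z^2 + c is an assumption (standard Mandelbrot form); the
# stopping rules follow the appendix text.

def hyperbolic_square(x, y):
    """(x + gamma*y)^2 = (x^2 + y^2) + gamma*(2*x*y), since gamma^2 = 1."""
    return x * x + y * y, 2.0 * x * y

def classify(cx, cy, max_iter=500, threshold=1e6):
    """Classify the parameter c = cx + gamma*cy as 'inside' (non-divergent),
    'escaped', or 'negative-norm' (iteration stopped, as in the appendix)."""
    x, y = 0.0, 0.0
    for _ in range(max_iter):
        x, y = hyperbolic_square(x, y)
        x, y = x + cx, y + cy
        norm = x * x - y * y          # hyperbolic (pseudo) norm
        if norm < 0.0:
            return 'negative-norm'
        if norm > threshold:
            return 'escaped'
    return 'inside'                   # a non-divergent point
```

Scanning `classify` over an $800 \times 800$ grid of $c$ values in the $[-5,5]$ window would produce the kind of picture described above.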
$d=a^2+4b$ is the discriminant. $a$ is, as in\nthe text, the linear part of the quadratic relation.\n\n\\begin{figure}\n\\centerline{\\psfig{figure=fig1.ps,height=5.5cm,width=5.5cm}\n \\hfil\\psfig{figure=fig2.ps,height=5.5cm,width=5.5cm}}\n\\centerline{\\hfil Fig.1 \\hfil\\hfil\\hfil\n Fig.2 \\hfil}\n\\centerline{\\psfig{figure=fig3.ps,height=5.5cm,width=5.5cm}\n \\hfil\\psfig{figure=fig4.ps,height=5.5cm,width=5.5cm}}\n\\centerline{\\hfil Fig.3 \\hfil\\hfil\\hfil\n Fig.4 \\hfil}\n\\centerline{\\psfig{figure=fig5.ps,height=5.5cm,width=5.5cm}\n \\hfil\\psfig{figure=fig6.ps,height=5.5cm,width=5.5cm}}\n\\centerline{\\hfil Fig.5 \\hfil\\hfil\\hfil\n Fig.6 \\hfil}\n\\end{figure}\n\\end{appendix}\n\\newpage\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} diff --git a/data_all_eng_slimpj/shuffled/split2/finalzzebxv b/data_all_eng_slimpj/shuffled/split2/finalzzebxv new file mode 100644 index 0000000000000000000000000000000000000000..e3c035eae4a56e6df26b58d0f85e722b67d38aa3 --- /dev/null +++ b/data_all_eng_slimpj/shuffled/split2/finalzzebxv @@ -0,0 +1,5 @@ +{"text":"\\section{Introduction}\n\nUnderstanding the physics of Type Ia Supernova (SN Ia) explosions is one of the\nmost important issues of contemporary astrophysics, given the role SNe Ia play\nas distance indicators for cosmology and as main producers of heavy elements in\nthe Universe. One of the keys to unlocking the secrets of SN~Ia explosions is\nto study their diversity.\n\nSNe Ia are thought to be the thermonuclear explosions of carbon--oxygen white\ndwarfs driven to ignition conditions by accretion in a binary system. Since\nexplosive burning of the CO mixture occurs when the white dwarf's mass is close\nto the Chandrasekhar limit, it has long been speculated that SNe~Ia should be\ngood candidates for standard candles. \n\nWhile earlier impressions were that SNe Ia are quite homogeneous, it was later\nnoted that there are intrinsic differences among them. 
However,\ncorrelations have been found that describe SNe Ia as a one-parameter family.\n\\citet{phi93} measured the decline in $B$-band magnitude from $B$-band maximum\nto 15 d later, a quantity he called $\\Delta m_{15}(B)$, and found that\nbrighter objects have a smaller decline rate than dimmer ones. The decline rate\ntherefore not only is useful for arranging SNe Ia in a `photometric'\none-parameter sequence, but should also reflect, although possibly in a gross\nway, SN Ia physics.\n\nThis is matched by a spectroscopic sequence \\citep{nug95}, defined by $\\cal\nR$(Si~{\\sc ii}), the ratio of the depth of two absorptions at 5800 and 6100\\,\\AA,\nboth of which are usually attributed to Si~{\\sc ii}\\ lines. This ratio correlates\nwith the absolute magnitude of SNe~Ia and, in turn, with the rate of decline.\nSpectroscopic models suggest that most spectral differences are due to \nvariations in the effective temperature. In the context of Chandrasekhar-mass\nexplosions, these variations can be interpreted in terms of a variation in the\nmass of $^{56}$Ni produced in the explosion. The relative behaviour of the two\nSi~{\\sc ii}\\ lines is, however, counterintuitive, and still lacks a thorough\ntheoretical explanation. Garnavich et al. (2004) suggest that the bluer line is\naffected by Ti~{\\sc ii}\\ lines for objects with $\\Delta m_{15}(B)\\geq1.2$, but this\nis not supported by detailed spectral synthesis studies of SN~1991bg and\nSN~2002bo (\\citealt{maz97}; \\citealt{ste05}).\n\nAlthough a one-parameter description of SNe Ia has proved to be very\nuseful, it does not completely account for the observational diversity\nof SNe Ia \\citep[e.g.\\ ][]{ben04,ben05}. 
In fact, earlier studies\n\\citep[e.g.\\ ][]{pat96,hat00} suggest that the photospheric expansion\nvelocity, which can be taken as a proxy for the kinetic energy release\nin the explosion, correlates with neither $\\Delta m_{15}(B)$ nor\n$\\mathcal{R}$(Si~{\\sc ii}) [see also \\citet{wel94} for an early attempt\nto correlate SNIa observables]. However, \\citet{ben05} found a spread\nin the time-averaged rate of decrease of the expansion velocity of the Si~{\\sc ii}\\ 6355-\\AA\\ \nabsorption after maximum, which might suggest another means of\nclassifying SNe Ia. They used $\\langle\\dot{v}\\rangle$ among other\nparameters to perform a computer-based hierarchical cluster analysis\nof a sample of 26 SNe Ia. This led to a partitioning of the SNe into\nthree groups, called, respectively, high velocity gradient (HVG;\n$\\langle\\dot{v}\\rangle = 97\\pm16$ km$\\;$s$^{-1}\\,$d$^{-1}$), low velocity gradient (LVG;\n$\\langle\\dot{v}\\rangle = 37\\pm18$ km$\\;$s$^{-1}\\,$d$^{-1}$), and FAINT. The FAINT group\nincludes SNe that are intrinsically dim, on average $\\sim 2$ mag\nfainter than SNe belonging to the other two groups. Their velocity\ngradient is large, $\\langle\\dot{v}\\rangle = 87\\pm20$ km$\\;$s$^{-1}\\,$d$^{-1}$. HVG and\nLVG SNe have similar mean absolute blue magnitude at maximum, but the\nHVG SNe have a smaller spread in $\\Delta m_{15}(B)$, and all SNe~Ia with\n$\\Delta m_{15}(B) \\le 1.05$ are LVGs. \\citet{ben05} confirmed the\nrelation between $\\cal R$(Si~{\\sc ii}) and $\\Delta m_{15}(B)$, but find a\nlarger scatter among LVG SNe, especially at the bright end.\n\nSpectra are an invaluable source of information, on both kinematics\nand the chemical composition of the ejecta. While other studies aim at\nextracting information by reproducing spectra or line ratios with\nmodels (Stehle et al. 2005; Bongard et al. 
2005), this work is\nbased on direct measurements of spectroscopic data, therefore providing\na complementary approach [see also \\citet{fol04}, who \nemphasized the time evolution]. We focus on a comparison of the spectral\nproperties of different SNe. This requires a large enough number \nof objects with spectral data of good quality and at comparable \nepochs of evolution. During the last few years, the number of such \nobjects has increased tremendously.\n\nThis work is thus based upon a collection of published as well \nas unpublished spectral data for 28 SNe\nIa, 25 of which are from \\citet{ben05}. The expansion velocities of some\nclearly defined features were measured systematically and consistently, as were\ntheir equivalent widths (EWs, see section \\ref{sec:EWMeasurementtechnique}). With the \naim of gathering information about the\ndifferences in chemical composition, we studied a number of line strength\nratios, involving lines of intermediate mass elements (IME) such as S or Si as\nwell as lines of Fe group elements. Thus, our study should explore the extent\nof nuclear burning in different objects. It also turned out\nto be interesting to look at ratios involving a line of oxygen as\nrepresentative of unburned or partially burned material. Taking into account seven \ndifferent lines, this paper aims to extend the studies cited above.\n\nMeasurements were performed for different objects at comparable epochs of\nevolution, so that values can be contrasted, differences examined and links to \nthe physical properties of the objects identified. One possible approach is to\nexamine the relation between line velocities and strengths on the one hand and\nthe light curve decline rate on the other. \n\n\n\\section{Spectral data and measurements}\n\\label{sec:measurements}\n\nWe used the sample of SNe from \\citet{ben05}, except for one object (1999cw)\nfor which there was no spectrum at a suitable epoch (see below). 
Three new\nobjects (1991M, 2004eo, 2005bl) were added; the sample was divided into the\nthree groups defined in \\citet{ben05}. An overview of the objects, group\nassignment, $\\Delta m_{15}(B)$ and $\\langle\\dot{v}\\rangle$ values, and the sources of the\nrespective spectra is given in Table \\ref{tab:objtable}. These data are taken\nfrom \\citet{ben05} -- see also references therein -- except for additional\/updated\nobjects (see table notes), for which they were newly calculated.\n\nFor our measurements, we selected from our database about 100 spectra containing\n at least one absorption that can be clearly identified.\n\nIn a SN Ia spectrum near maximum, several such features can be seen; \nTable \\ref{tab:featuretable} lists the lines used for this study. The EW ratios \nconsidered\\footnote{Note that we use the EW as a measure for \nline strengths, while \\citet{nug95} use line depth.} are given in Table \n\\ref{tab:ratiotable}. For an overview of features, see Fig. \\ref{spectrum}.\n\nStarting from the blue end, a feature typical for SNe Ia shows up \naround $3750\\,$\\AA\\, which is due to the Ca~{\\sc ii}\\ H\\&K lines. This feature \nis, however, not evaluated in this study because the spectra of some \nobjects do not extend far enough to the blue to include that feature.\n\nThe first two features evaluated, viewed from the blue, are troughs (i.e.\nunresolved blends of many lines) mostly related to Fe lines. The bluer one is\ncentered around $\\sim4300\\,$\\AA\\ in the observed spectra, and is made not only of\nFe~{\\sc ii}\\ and Fe~{\\sc iii}\\ lines, but has also contributions from Mg~{\\sc ii}\\, for the\nfainter SNe and -- for very faint objects -- strong Ti~{\\sc ii}\\ lines. The other \ntrough is centered around $\\sim4800\\,$\\AA\\ in the spectra and contains lines of\nFe~{\\sc ii}\\ and Fe~{\\sc iii}\\ with almost no contamination by other elements except some\nSi~{\\sc ii}\\ lines. 
Both features are frequently accompanied by small `notches' at\ntheir edges. These are weaker features that are not necessarily caused by Fe\nlines. The determination of the edges of the troughs is therefore difficult;\nspecial care was taken that this was done in a consistent way for all\nobjects. Even then, measurements of the $\\sim4300$-\\AA\\ trough for\nthe faintest objects cannot be compared with the rest; the appearance of Ti~{\\sc ii}\\\nlines affects not only the depth of the trough but also its blue extent. \nAlso, measurements for 1991T-like objects should be taken with some\ncare because of the Fe~{\\sc iii}\\ lines dominating the feature before maximum\nlight (see \\citealt*{maz95}).\n\nLines related to the IME ions S~{\\sc ii}\\ and Si~{\\sc ii}\\ are perhaps the most \ncharacteristic features of SN Ia spectra. S~{\\sc ii}\\ absorption causes a W-shaped\ntrough with observed minima at $\\sim 5250$ and 5400\\,\\AA, respectively. The\nminima can be attributed to S~{\\sc ii}\\ multiplets with their strongest lines at \n$\\sim 5445$ and 5640\\,\\AA, respectively. All those transitions \noriginate from relatively\nhigh-lying lower levels, which causes a significant weakening of the S~{\\sc ii}\\\nabsorption at low temperatures. Si~{\\sc ii}\\ features, on the other hand, can be seen\nat $\\sim 5750$ and 6100\\,\\AA. Both features are blends, with average rest\nwavelength 5972 and 6355\\,\\AA, respectively. The redder line is by far the\nstronger, and its temperature dependence is rather weak. On the other hand, the\nstrength of the 5972-\\AA\\ absorption is strongly correlated with temperature,\nbut the correlation is opposite to intuitive expectations from atomic physics\n(see also section \\ref{sec:EW_Si5972}). 
An explanation might involve the\npresence of other, weaker lines.\n\nIn the spectra of almost all SNe, another feature is\nvisible around $7500$\\,\\AA\\ near maximum light, which is a blend of two very close\nO~{\\sc i}\\ lines with a mean rest wavelength of 7773\\,\\AA. The absorption is especially prominent\nin spectra of FAINT SNe. Unfortunately, the feature suffers from contamination \nby the telluric absorption at $7605$\\,\\AA\\ (and sometimes perhaps also by Mg~{\\sc ii}\\ \nlines), which makes measurements complicated (see section \\ref{sec:EWMeasurementtechnique}).\n\nAll measurements were carried out using {\\sc iraf} (see Acknowledgments). The spectra \nwere deredshifted; $z$ values were taken from wavelength measurements of interstellar \nNa~{\\sc i}\\ D and Ca~{\\sc ii}\\ H\\&K lines, or -- if this was uncertain -- from either the literature \nif any was available or from the LEDA or NED catalogues (using galaxy recession\nvelocities). No reddening correction was applied to the spectra. As discussed\nbelow, reddening should have negligible impact on the measured values.\n\nThe spectral data do not cover the same epochs for all objects. In order to\ninclude as many SNe as possible, we interpolated\/extrapolated EW and velocity\nvalues at $t=0\\;\\textrm{d}$ for each object by performing a least-squares fit of\nthe measured values at different times (this also yields standard deviations\nused as statistical error estimates). Since the features can be assumed to\nevolve linearly in a limited interval of time, we evaluated spectra from --5 to\n+5~d relative to $B$ maximum. 
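The interpolation step just described might be sketched as follows; the measurements themselves were made in {\sc iraf}, so this Python fragment only illustrates the fitting arithmetic, assuming a plain unweighted straight-line fit and at least three spectra:

```python
import math

def ew_at_bmax(t, y):
    """Least-squares line y = a + b*t through (epoch, EW) pairs, returning the
    value at t = 0 (B maximum) and its standard error.  Unweighted fit with
    n >= 3 points is an assumption; the paper only states 'least-squares'."""
    n = len(t)
    tbar = sum(t) / n
    ybar = sum(y) / n
    stt = sum((ti - tbar) ** 2 for ti in t)
    b = sum((ti - tbar) * (yi - ybar) for ti, yi in zip(t, y)) / stt
    a = ybar - b * tbar                       # interpolated EW at t = 0
    resid2 = sum((yi - (a + b * ti)) ** 2 for ti, yi in zip(t, y))
    s2 = resid2 / (n - 2)                     # residual variance (needs n > 2)
    sigma_a = math.sqrt(s2 * (1.0 / n + tbar ** 2 / stt))
    return a, sigma_a
```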
In cases where only one or two spectra were\navailable in this range, additional spectra from epochs between --8 and +6.5~d\nwere used if available (for measurements of the Si~{\\sc ii}\\ $\\lambda5972$ EW and\nvelocity, and of the S~{\\sc ii}\\ EW, the upper limit was +5~d in all cases, as these\nvalues evolve rapidly at later times for some objects).\n\nAfter this selection, some objects remained with only one or two spectra\navailable; this required a special evaluation procedure. In cases with two\nspectra, the error cannot be computed from the regression; it was therefore\ncalculated by propagating the errors of the single \nmeasurements\\footnote{Quadratic error propagation using the derivatives of the\nformula: $y_0=\\left(y_1t_2-y_2t_1\\right) \/ \\left(t_2-t_1\\right)$, where $y_0$ is the value at\n$t=0$, and $y_{1,2};t_{1,2}$ are the coordinates of the two given data points.}\ninstead. In cases with only one spectrum, the value from this spectrum is\ngiven in the diagrams, if it can be expected that the evolution between the\ndate of $B$ maximum and that of the spectrum is not too rapid. The error was then \ncalculated by adding to the error of measurement (see below) an estimate for\nthe error in the estimated time. The latter was computed as the average slope\nof the regression lines for objects belonging to the same SN group (with\n$\\geq3$ spectra available), multiplied by the time offset of the single\nspectrum relative to the day of $B$ maximum.\n\nFor the SNe 1984A and 2002dj EW measurements, the described procedure is not \nreasonable, as the spectral coverage is sparse around $B$ maximum, and the observations\nof these two objects show a particularly pronounced non-linear time evolution \nfor many EWs. 
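For the two-spectra case, the formula quoted in the footnote and its quadratic error propagation might be sketched as follows (an illustrative fragment, not the original code; the partial derivatives of $y_0$ with respect to $y_1$ and $y_2$ are $t_2/(t_2-t_1)$ and $-t_1/(t_2-t_1)$):

```python
import math

def ew_from_two_points(t1, y1, s1, t2, y2, s2):
    """Linear extrapolation of two (epoch, value) measurements to t = 0,
    with the uncertainty from quadratic propagation of the measurement
    errors s1 and s2, as described in the footnote."""
    dt = t2 - t1
    y0 = (y1 * t2 - y2 * t1) / dt
    # dy0/dy1 = t2/dt and dy0/dy2 = -t1/dt, added in quadrature:
    sigma0 = math.sqrt((t2 * s1) ** 2 + (t1 * s2) ** 2) / abs(dt)
    return y0, sigma0
```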
The EW values and respective uncertainties were thus obtained \nby performing a quadratic polynomial fit\\footnote{Non-linear least-squares \nMarquardt--Levenberg algorithm; sign of the leading polynomial coefficient \nfixed manually after investigating data} to all available data between --10 \nand +10~d relative to $B$ maximum.\n\nRatios were calculated from the $EW(t=0)$ values obtained as described; the\nerrors attached to the ratios were computed by propagating the errors of the\nEWs quadratically.\n\n\n\\subsection{Velocity measurements}\n\nThe mean expansion velocity of a given line was determined from the blueshift\nof the absorption relative to the rest wavelength. The velocity thus derived is\nphysically meaningful only for single lines or for close multiplets that\nare sufficiently isolated from other strong features. Since most lines have a \nP Cygni shape and are blends, one has to think thoroughly about how to\nmeasure the `mean' wavelength of an absorption. We used two different methods: \nThe first is to use the gaussian fit routine\nwithin {\\sc iraf}; the second is to estimate the centre of the absorption by eye\n(taking into account problems such as the sloping continuum). The values\nobtained from several such measurements were then averaged; the respective\nstandard deviation is a crude estimate of the error introduced by the manual\nmeasurements, and was used to calculate the error in the cases mentioned above.\nFig. \\ref{MessungIllustration} shows an example of how measurements were performed.\n\n\n\\subsection{Equivalent width measurements}\n\\label{sec:EWMeasurementtechnique}\n\nAs a measure of line strength, we take the equivalent width (EW). This is \ndefined as:\n\\begin{eqnarray}\n\tEW=\\int^{\\lambda_1}_{\\lambda_2}\n\t \\frac{F_{\\rm C}(\\lambda)-F(\\lambda)}{F_{\\rm C}(\\lambda)} d\\lambda, \n\\end{eqnarray}\nwhere $F(\\lambda)$ is the flux density level in the spectrum and $F_{\\rm C}(\\lambda)$\nthe continuum flux density. 
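Numerically, the integral above reduces to a simple quadrature once the spectrum and the pseudo-continuum are sampled on a common wavelength grid. A minimal sketch (the actual measurements were made interactively in {\sc iraf}):

```python
def equivalent_width(wave, flux, cont):
    """Trapezoidal estimate of EW = int (F_C - F)/F_C dlambda over a feature.
    `wave`, `flux` and `cont` are equal-length samples of wavelength (in
    Angstrom), flux density and pseudo-continuum flux density."""
    depth = [(c - f) / c for f, c in zip(flux, cont)]
    ew = 0.0
    for i in range(len(wave) - 1):
        ew += 0.5 * (depth[i] + depth[i + 1]) * (wave[i + 1] - wave[i])
    return ew
```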
EW is insensitive to multiplication \nof the flux density spectrum by a constant between $\\lambda_1$ and $\\lambda_2$. Therefore, \nif $\\lambda_1$ and $\\lambda_2$ are not too far away from each other, reddening \neffects can be neglected. Other `multiplicative errors' are also suppressed.\n\nThe EW of an absorption (or an emission) line can be measured inside {\\sc iraf} (when\nshowing spectra by \\textsc{splot}) by entering the beginning and ending wavelengths of\nthe line, as well as the continuum level at those wavelengths. The main\ndifficulty is to define the continuum and the starting and ending points for a\nfeature, especially considering that the lines have P Cygni profiles. We\nproceeded as follows (see Fig. \\ref{MessungIllustration}):\n\nSince a `real' continuum level cannot be determined in SN Ia spectra owing to\nthe multitude of line absorptions, for a single P Cygni profile we defined a\npseudo-continuum level to be the flux density level near the edges of the \nfeature, neglecting the influence the emission component has on this. \nThe edges of a feature were set roughly where the slope of the flux curve \nequals the slope of an imaginary pseudo-continuum curve joining the\nopposite sides of the line. The error involved with this procedure has no\neffect on the comparative study as long as all measurements are done\nconsistently. Some lines and absorption troughs regularly have poorly\ndeterminable or jagged edges. In these cases, care was taken that the\nmeasurements were done as homogeneously as possible for all objects.\n\nEW measurements were carried out several times (see Fig.\n\\ref{MessungIllustration}), taking into account reasonable upper and lower\nestimates of the continuum level and the starting and ending points. 
The\nstandard deviations thereby obtained for the EW values are again a rough estimate of\nthe errors introduced by the manual input of beginning\/ending points and the\ncontinuum level, and were further evaluated in the above-mentioned cases.\n\nThe O~{\\sc i}\\ $\\lambda7773$ line requires special attention. In most spectra, it is\ncontaminated by an atmospheric absorption that is not completely removed in the\nreduction process. In these cases, the atmospheric absorption or its residuals\nwere cut by visual judgement before measuring the EW, and the measurements were\ncarried out eliminating the contamination in different ways, so that the\nresulting standard deviation roughly represents the error introduced by the\nsnipping process. \n\n\n\\section{Expansion velocities}\n\nWe only measured the velocities of features that comply with the requirements\nabove, namely features that are due to a single ion, since single lines are not\navailable. The lines discussed below are Si~{\\sc ii}\\ $\\lambda5972,6355$, and S~{\\sc ii}\\ \n$\\lambda5640$.\\footnote{The two S~{\\sc ii}\\ trough minima basically provide the same \ninformation, so we do not discuss measurements of the $\\lambda5454$ \nminimum.} Measured values are listed in Table \\ref{tab:VelocityValueTable}.\n\n\n\\subsection{Si~{\\sc ii}\\ $\\lambda6355$}\n\nThe velocities at maximum derived from this line show a large scatter, \nespecially at lower $\\Delta m_{15}$ values (Fig. \\ref{fig:V_Si6355}). \nHowever, once the SNe are divided into velocity gradient groups, it \nturns out that most SNe, covering a wide range of $\\Delta m_{15}$, \nfrom 0.9 to $\\sim 1.7$, and including all LVG, some HVG, and the \nbrightest among the FAINT SNe, have a roughly\nconstant $v$(Si~{\\sc ii}\\ $\\lambda6355$), with only a small scatter ($11000 \\pm\n1000$\\,km~s$^{-1}$). 
HVG objects have a wide range of $v$(Si~{\\sc ii}\\ $\\lambda6355$)\nvalues, with no correlation with $\\Delta m_{15}$, and are responsible for most\nof the scatter. For FAINT SNe, $v$(Si~{\\sc ii}\\ $\\lambda6355$) goes from values\ncomparable to those of the LVG group at $\\Delta m_{15} \\sim 1.5$--$1.7$ to\nsmaller values as $\\Delta m_{15}$ increases.\n\n\n\\subsection{Si~{\\sc ii}\\ $\\lambda5972$}\n\nAlthough the velocities of this line (Fig. \\ref{fig:V_Si5972}) show the same overall tendencies as those\nderived from the $\\lambda6355$ feature, there are differences,\nespecially among the HVG objects. They reach lower maximum velocities, leading\nto a smaller spread within this group. A slight tendency to lower values (by\n$\\sim 500$\\,km~s$^{-1}$) can also be noted in every group. This is probably due to\nthe fact that the line is weaker and thus forms deeper in the ejecta than Si~{\\sc ii}\\\n$\\lambda6355$. The apparently larger scatter among LVG objects may be due to\nthe fact that the weak feature often shows a more complicated shape and suffers\nfrom noise, making measurements of the centroid less reliable. Also,\ncontamination from other lines may occur.\n\n\n\\subsection{S~{\\sc ii}\\ $\\lambda5640$}\n\nThis S~{\\sc ii}\\ absorption (Fig. \\ref{fig:V_S5640}) has a behaviour similar to\nthat of the Si~{\\sc ii}\\ absorptions discussed above. However, it shows significantly\nlower velocities than the Si~{\\sc ii}\\ $\\lambda6355$ feature, as can be expected\nsince the line is much weaker \\citep[see also][]{blo06}. The mean differences\nfrom the values derived from the Si~{\\sc ii}\\ line for the respective groups are as\nfollows: HVG: $\\sim 1000$--$4000$\\,km~s$^{-1}$; LVG: $\\sim 1000$\\,km~s$^{-1}$; FAINT: $\\sim\n2000$\\,km~s$^{-1}$. The spread of values for the HVGs is much smaller in $v$(S~{\\sc ii}) than\nin $v$(Si~{\\sc ii}\\ $\\lambda6355$), and also slightly smaller than that of\n$v$(Si~{\\sc ii}\\ $\\lambda5972$). 
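The conversion from a measured blueshift to an expansion velocity used throughout this section can be sketched as follows (an illustrative fragment assuming the non-relativistic Doppler relation, which is adequate at the $v/c \approx 0.04$ of these lines):

```python
C_KMS = 299792.458  # speed of light in km/s

def blueshift_velocity(lambda_obs, lambda_rest):
    """Expansion velocity (km/s) from the measured absorption minimum
    `lambda_obs` and the (mean) rest wavelength `lambda_rest`, both in
    Angstrom; positive for a blueshifted absorption."""
    return C_KMS * (lambda_rest - lambda_obs) / lambda_rest
```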
\n\n\n\\section{Equivalent widths}\n\nIn this section, the measurements of the individual features are presented and \ndiscussed. Values are given in table \\ref{tab:EWValueTable}.\n\n\n\\subsection{Fe-Mg(-Ti) trough $\\sim 4300$\\,\\AA}\n\nThe EW of this feature is roughly constant for all SNe with $\\Delta m_{15}\n\\lesssim 1.8$ (Fig. \\ref{fig:EW_Fe4300}). The lack of evolution suggests\nthat Fe dominates this feature, or at least that the relative contribution of\nFe and Mg does not evolve with $\\Delta m_{15}$ in the range from 1 to 1.8. HVGs\ntend to have larger EWs than LVGs because in general they have broader and\ndeeper lines. The EW rapidly increases for FAINT SNe, which is the effect of\nTi~{\\sc ii}\\ lines becoming very strong in those coolest objects, as was the case\nfor, e.g.\\ , SN~1991bg \\citep{maz95}. \n\n\n\\subsection{Fe trough $\\sim 4800$}\n\nThe EW of this feature is essentially constant in all SNe, with a rather large\ndispersion among objects with the same decline rate (Fig.\n\\ref{fig:EW_Fe4800}). There is a slight trend to increasing values for the\nfainter SNe, possibly an effect of the lower temperature which makes the Fe~{\\sc ii}\\\nlines stronger. On the other hand, some of the peculiar bright SNe, such as\nSN~1991T, where the Fe~{\\sc iii}\/Fe~{\\sc ii}\\ ratio is large (Mazzali et~al.\\ 1995), have\nvalues comparable to other SNe, suggesting that Fe~{\\sc iii}\\ dominates this feature,\nas well as the Fe $\\sim 4300$ trough, in all LVGs. HVG SNe have again larger values \nthan LVGs, and now the trend is even\nclearer. Analogy with the FAINT SNe may suggest that the HVG SNe have a lower temperature as a\nconsequence of the higher velocity, but it may also just imply that Fe reaches\nhigher velocities in HVGs, as do S and Si.\n\nSNe~1984A and 1983G have somewhat larger values than the other SNe. This is\nprobably due to the broad-lined nature of these SNe, the SNe~Ia with the\nhighest velocities ever recorded \\citep{ben05}. 
The other high-velocity SN,\nSN~1997bp, could not be measured since its spectra do not extend to the blue. \n\nThere is an apparent tendency for SNe to cluster in several small groups. We\nrefrain from interpreting this as an indication of different modes of the\nexplosion, and defer this to a time when more data are available. \n\n\n\\subsection{S~{\\sc ii}\\ trough $\\lambda\\sim5454,\\sim5640$}\n\\label{sec:EW_S}\n\nThe EW of the S~{\\sc ii}\\ feature (Fig. \\ref{fig:EW_S}) shows a kind of parabolic \ntrend, with very small scatter. It has a small value for SNe with $\\Delta\nm_{15} < 1.0$. It reaches a broad maximum in all other LVG and most HVG SNe with\n$\\Delta m_{15}(B)$$\\;= 1.1$--$1.5$, and then it progressively declines at $\\Delta m_{15} >1.6$. \nThe observed drop may be explained as an effect of the insufficient population\nof the highly excited lower levels of these lines as the temperatures of the\nSNe drop. However, at the highest temperatures a reduction in the IME abundance\nis also required to reproduce the observed weakening of the lines in objects\nsuch as SN~1991T (Mazzali et~al.\\ 1995). Therefore, it is possible that a trend\nof increasing abundance going from the slowest to the intermediate decliners,\nand then decreasing abundance from there to the fastest decliners is also\npresent. The HVG SNe have a slightly larger value than the LVGs, and SN~1984A again \nstands out by having an anomalously large value.\n\n\n\\subsection{Si~{\\sc ii}\\ $\\lambda5972$}\n\\label{sec:EW_Si5972}\n\nThe EW of the weaker Si~{\\sc ii}\\ line (Fig. \\ref{fig:EW_Si5972}) correlates very \nwell with $\\Delta m_{15}$, and could therefore be used as a luminosity\nindicator just as well as the line strength ratios presented below. \nThis behaviour is at the basis of the observed relation between\n$\\mathcal{R}$(Si~{\\sc ii}) and SN luminosity \\citep{nug95}, as is illustrated \nby a plot of $\\mathcal{R}$(Si~{\\sc ii}) versus EW(Si~{\\sc ii}\\ $\\lambda5972$) (Fig. 
\n\\ref{fig:NugentComparison}) and by the weaker correlation of \nEW(Si~{\\sc ii}\\ $\\lambda6355$) with $\\Delta m_{15}$ (see Fig. \n\\ref{fig:EW_Si6355}). The very existence of the trend \nis puzzling, since the Si~{\\sc ii}\\ $\\lambda5972$ line originates from a\nrather highly excited level, and its strength may be expected to correlate with\ntemperature directly rather than inversely. The explanation may involve the\ncontribution of lines from other elements and may require a full non-local \nthermodynamic equilibrium (NLTE) analysis.\nHVG SNe now blend in with the LVG SNe. This is somewhat surprising, since \nHVG SNe have the highest velocities (Fig. \\ref{fig:V_Si6355}). Clearly, the \nline does not become stronger in the SNe where it is faster. \n\n\n\\subsection{Si~{\\sc ii}\\ $\\lambda6355$}\n\nThis line shows a number of interesting trends (Fig. \\ref{fig:EW_Si6355}). \nFor most LVGs (with $\\Delta m_{15} < 1.6$) the EW has a tendency to increase\nslowly with increasing $\\Delta m_{15}$. At larger decline rates, where the\nFAINT SNe are, the value drops again. The two bright and peculiar LVG SNe,\n1991T and 1997br, have much smaller values. This general behaviour is similar to\nthat of the S~{\\sc ii}\\ feature, and may be understood as the effect of temperature\nand possibly of abundance: in SNe~1991T and 1997br the degree of ionisation is\nhigher than in spectroscopically normal objects, and the Si~{\\sc ii}\\ line is\naccordingly weaker, but a low abundance of the IME is also required to\nreproduce the observed spectra (Mazzali et~al.\\ 1995). The Si~{\\sc ii}\\ line is\nstrongest for intermediate decliners, where temperature reaches the optimal\nvalue for this line and IME abundance possibly reaches a peak. The line weakens\nin FAINT SNe, which are cooler and possibly have a smaller IME abundance. 
This\neffect is less marked than it is in the S~{\\sc ii}\\ feature, since the Si~{\\sc ii}\\\n$\\lambda6355$ line originates from levels with a much smaller excitation potential\nand is less sensitive to temperature. The observed drop may therefore more\ndirectly reflect a change in the abundance of Si in near-photospheric mass\nlayers $(\\sim 10000\\,$km~s$^{-1}$) of FAINT SNe.\n\nThe behaviour of the HVG SNe, on the other hand, is extremely different: these SNe\nare located almost vertically on the plot. Although they cover a smaller\nrange of $\\Delta m_{15}$ values than the LVG SNe (1.05 -- 1.5 versus 0.9 -- 1.5),\ntheir EW(Si~{\\sc ii}\\ $\\lambda6355$) spans about a factor of two in value. This may\nreflect the presence of high-velocity absorption in the Si~{\\sc ii}\\ line\n\\citep{maz05a}. SN~1984A is again the most extreme object, followed by\nSNe~1997bp and 1983G, but these objects appear to be the tip of a smooth\ndistribution. The distribution of HVGs in EW is similar to that in $v$(Si~{\\sc ii}\\\n$\\lambda6355$) (Fig. \\ref{fig:V_Si6355}). Faster lines tend to be broader and deeper.\nUnderstanding this kind of behaviour may prove to be a very important step in\nour effort to understand the systematics of SNe~Ia. \n\n\n\\subsection{O~{\\sc i}\\ $\\lambda7773$}\n\nThe EW of this line (Fig. \\ref{fig:EW_O}) tends to rise towards higher $\\Delta\nm_{15}$, but shows a rather large scatter, especially at the bright end. Here,\nthere are both objects which show a very weak O~{\\sc i}\\ feature around $B$ maximum\n\\footnote{SN~1990N is missing from the plot because its O~{\\sc i}\\ line is too weak to be \nmeasured; for other missing objects, no suitable spectral data in this wavelength \nrange are available.} as well as objects\nexhibiting values $\\gtrsim90\\,\\textrm{\\AA}$. Note that these differences can be\nfound both within the HVG and LVG groups, which cover roughly the same range of\nmeasured O~{\\sc i}\\ EW values. 
They may partly be due to the above-mentioned \ndifficulties of measuring the O~{\\sc i}\\ line. The overall trend of higher values \nfor fainter objects is probably a temperature effect, but it may also reflect\nchanges in abundance. Among FAINT SNe, the trend appears to be reversed. This\nis possibly due to the decrease of photospheric velocities at the faint end.\n\n\n\\section{Line strength ratios}\n\nIn this section we discuss selected ratios of EW. We focus on ratios that are\nuseful indicators of $\\Delta m_{15}$, and on ratios that bear particular\nphysical significance because they involve elements that are synthesised in\ndifferent parts of the exploding white dwarf. The discussed ratio values\nare given in table \\ref{tab:RatioValueTable}.\n\n\n\\subsection{$\\mathfrak{R}$(Si~{\\sc ii}) (Si~{\\sc ii}\\ $\\lambda5972$ versus Si~{\\sc ii}\\ $\\lambda6355$)}\n\nOur measurement is similar to the $\\mathcal{R}$(Si~{\\sc ii}) value of \\citet{nug95},\nbut it differs from it since we use the EW. The EW ratio of the two Si~{\\sc ii}\\\nlines follows the trend found by \\citet{nug95} of increasing $\\mathcal{R}$(Si\nII) with increasing $\\Delta m_{15}$ (Fig. \\ref{fig:R_Si5972_Si6355}).\nHowever, as noted in \\citet{ben05}, the scatter at the bright end is larger. \nAs we noted above, the observed behaviour is mainly caused by the\nunexplained linear increase of the Si~{\\sc ii}\\ $\\lambda5972$ line strength for\nincreasing $\\Delta m_{15}$.\n\n\n\\subsection{Fe-Mg(-Ti) trough $\\sim 4300$ versus Fe trough $\\sim 4800$}\n\nThe ratio of the EWs of these two broad absorption troughs (Fig. \n\\ref{fig:R_Fe4300_Fe4800}) is fairly constant for $\\Delta m_{15} \\leq 1.8$. \nSome of the FAINT SNe (1991bg, 1999by, and 2005bl) have much larger values. \nThe rise at the faint end is clearly due to the appearance of Ti~{\\sc ii}\\ lines \nin the 4300-\\AA\\ feature at low temperature. 
\n\n\n\\subsection{S~{\\sc ii}\\ $\\lambda\\sim5454,\\sim5640$ versus Si~{\\sc ii}\\ $\\lambda6355$}\n\nThis ratio correlates very well with $\\Delta m_{15}$ for FAINT objects (Fig.\n\\ref{fig:R_S_Si6355}). The LVG SNe also correlate reasonably well with $\\Delta\nm_{15}$, with a scatter of $\\sim 10$ per cent, but the HVG SNe do not. The average values\nof the HVG and LVG groups are very different. The HVG SNe again show an almost\nvertical behaviour, as they did in both the $v$(Si~{\\sc ii}\\ $\\lambda6355$) and\nthe EW(Si~{\\sc ii}\\ $\\lambda6355$) plots. Since EW(Si~{\\sc ii}\\ $\\lambda6355$) is enhanced in these SNe,\nthe ratio is smaller for them. For fainter objects, the behaviour mainly\nseems to reflect the above-mentioned (see section \\ref{sec:EW_S}) changes of\nionization structure with decreasing temperature: the S~{\\sc ii}\\ line strength\ndecreases rapidly as $\\Delta m_{15}$ increases, which is not as much the case\nfor the Si line.\n\n\n\\subsection{$\\mathfrak{R}$(S,Si) (S~{\\sc ii}\\ $\\lambda\\sim5454,\\sim5640$ versus Si~{\\sc ii}\\ $\\lambda5972$)}\n\nThis ratio correlates well with $\\Delta m_{15}$ for almost all objects,\nregardless of their group (Fig. \\ref{fig:R_S_Si5972}). It decreases almost \nlinearly with increasing $\\Delta m_{15}$, and is thus as suitable as \n$\\mathfrak{R}$(Si~{\\sc ii}) as a spectroscopic luminosity indicator. The trend for a \nsmaller ratio with increasing $\\Delta m_{15}$ was already present in the\nprevious `S\/Si' ratio, but here the scatter is much reduced and both LVG and\nHVG objects follow the correlation, the differences between the two groups\nbeing apparently suppressed. These weaker lines are in fact less affected than\nSi~{\\sc ii}\\ $\\lambda6355$ by the high velocities and the ensuing increased strength,\nas shown in the EW plots (Fig. \\ref{fig:EW_S} and \\ref{fig:EW_Si5972}). Even\nSN~1984A follows the general trend: once ratios are taken its large EW values\ncancel out. 
We cannot, however, draw any conclusions about Si distribution,\nvelocities, etc. from measurements involving the Si~{\\sc ii}$\\,\\lambda5972$ feature,\nbecause the behaviour of this line is not well understood, as discussed above. \n\n\n\\subsection{Si~{\\sc ii}\\ $\\lambda6355$ versus Fe trough $\\sim4800$}\n\nThe plot of this ratio (see Fig. \\ref{fig:R_Si6355_Fe4800}) is \nvery interesting, as is its possible meaning, which is discussed \nbelow. The ratio exhibits a `quadratic' behaviour: The values are \nsmall at small $\\Delta m_{15}$, they increase until they reach a \npeak at $\\Delta m_{15} \\sim 1.1$--$1.5$ and then they drop again \nfor very faint SNe such as 1991bg, 1997cn and 1999by. The\nbehaviour reflects that of EW(Si~{\\sc ii}\\ $\\lambda6355$) but is highly enhanced,\nsuggesting that we are seeing more than just the effect of temperature. The HVG SNe\nblend in with the other SNe, although they have larger values of both EW(Si~{\\sc ii}\\\n$\\lambda6355$) and EW(Fe $\\sim4800$). \n\n\n\\subsection{S~{\\sc ii}\\ $\\lambda\\sim5454,\\sim5640$ versus Fe trough $\\sim4800$}\n\nThis ratio behaves like the previous one (Fig. \\ref{fig:R_S_Fe4800}), as\ncould be expected since both Si and S are IME. The FAINT SNe now reach very\nsmall values, presumably because of the higher temperature sensitivity of the\nS~{\\sc ii}\\ feature than the Si~{\\sc ii}\\ $\\lambda6355$ line. \n\nIt is tempting to interpret the behaviour of this ratio and the one above as\ndue not only to temperature, but also to a trend for the brightest SNe to have\na higher abundance of Fe relative to IME in layers near the photosphere at\nmaximum ($v \\sim 10000$\\,km~s$^{-1}$). This is plausible since Fe~{\\sc ii}\\ and Si~{\\sc ii}\\ have\nsimilar ionisation potentials, and should respond similarly to changes in \ntemperature. 
The observed behaviour may indicate that bright SNe burn more to \nnuclear statistical equilibrium (NSE) ($\\sim 20$ per cent of $^{56}$Ni\\ \nhas decayed to $^{56}$Fe\\ at the time of maximum). The\ndrop of the ratio at the largest $\\Delta m_{15}$ values may then be due to the\nfact that now the IME abundance is beginning to decrease in the mass layers\nnear $v_{\\mathrm{ph}}$, after reaching a peak at $\\Delta m_{15} \\sim 1.1$--$1.5$. \n\nNote that $v_{\\mathrm{ph}}$ is smaller at larger $\\Delta m_{15}$. This implies a lower\nopacity, which in turn could be associated with a smaller Fe-group abundance\nrelative to IME in the layers between 9000 and 11000\\,km~s$^{-1}$, that is between the\nphotosphere of FAINT SNe and that of the other objects. This would suggest that\nthe FAINT SNe produce less NSE material, as is expected both from their dimness\nand their narrow light curves. The difference between FAINT SNe and brighter\nones would be in the degree of burning to NSE at velocities $\\sim 10000$\\,km~s$^{-1}$,\nas hypothesised in various models \\citep[e.g.\\ ][]{iwa99}. Burning to IME may also\nextend to lower velocities in FAINT SNe than in brighter ones. \n\n\n\\subsection{$\\mathfrak{R}$(Si,Fe) (Si~{\\sc ii}\\ $\\lambda5972$ versus Fe trough $\\sim4800$)}\n\nThis ratio, unlike the previous one, shows an almost monotonically rising trend. Over a\nlarge range of $\\Delta m_{15}$ values, it increases almost linearly with $\\Delta m_{15}$ \n(Fig. \\ref{fig:R_Si5972_Fe4800}). This ratio is thus also suitable as a luminosity indicator.\n\nAs for a possible explanation of the observed trend, it appears that the ratio\nis driven by the increasing strength of the Si~{\\sc ii}\\ feature with increasing\n$\\Delta m_{15}$, which, as discussed above, remains unexplained. \n\n\n\\subsection{O~{\\sc i}\\ $\\lambda7773$ versus Si~{\\sc ii}\\ $\\lambda6355$}\n\nThis ratio was calculated in order to investigate the relation between O and \nIME abundance. 
As we showed above, both EW(Si~{\\sc ii}\\ $\\lambda6355$) and EW(S~{\\sc ii}) \ndecrease at $\\Delta m_{15} > 1.5$. If this implies less burning even to IME \nin the faintest SNe, we might expect O\/IME ratios to increase in those objects.\n\nThe ratio of O~{\\sc i}\\ $\\lambda7773$ and Si~{\\sc ii}\\ $\\lambda6355$ indeed shows a slight trend to\nrise with $\\Delta m_{15}$ (Fig. \\ref{fig:R_O_Si6355}), but it is overlaid by\na large spread in values of $\\gtrsim 25$ per cent at almost every $\\Delta m_{15}$ value.\nNote again that the difficulty in measuring the O~{\\sc i}\\ line may affect our results.\n\n\n\n\\subsection{O~{\\sc i}\\ $\\lambda7773$ versus S~{\\sc ii}\\ $\\lambda\\sim5454,\\sim5640$}\n\nThe S~{\\sc ii}\\ line tracks the photosphere more accurately than Si~{\\sc ii}\\\n$\\lambda6355$. This ratio shows a tendency to increase with increasing\n$\\Delta m_{15}$ (Fig. \\ref{fig:R_O_S}), which is enhanced for \n$\\Delta m_{15}\\gtrsim1.5$. While the decrease in S~{\\sc ii}\\ line strength for large \n$\\Delta m_{15}$ (Fig. \\ref{fig:EW_S}) certainly drives the latter trend, and the \nrise in O~{\\sc i}\\ EW causes the tendency for $\\Delta m_{15}\\lesssim1.5$, how much all of \nthis is due to decreasing IME abundance compared to oxygen is unclear. \n\n\n\\subsection{O~{\\sc i}\\ $\\lambda7773$ versus Fe trough $\\sim4800$}\n\nThis ratio -- though exhibiting significant scatter especially at low $\\Delta\nm_{15}$ -- shows a clear trend to increase for $\\Delta m_{15}\\lesssim1.5$ (Fig. \n\\ref{fig:R_O_Fe4800}). This\ncan be understood by considering the tendency of the O~{\\sc i}\\ EW to rise and the\nbehaviour of the Fe $\\sim4800$ trough EW, which is essentially flat.\nInterestingly, for the faintest objects, an almost linear drop can be observed.\n\n\n\\section{Discussion}\n\nIn this section we briefly discuss the possible implications of the various\nmeasurements. 
\n\n\n\\subsection{Photospheric velocities}\n\nNear maximum, all LVG, some HVG and some FAINT SNe have a very similar Si~{\\sc ii}\\\nvelocity, $\\sim 11000$\\,km~s$^{-1}$\\ (Fig. \\ref{fig:V_Si6355}). This can be taken to imply that there\nis significant nuclear burning (at least to IME) in all these objects,\nirrespective of their brightness. As we know, $\\Delta m_{15}$ depends mostly on\nthe amount of NSE material synthesised \\citep[][and references therein]{maz01},\nwhile the kinetic energy (KE) depends also on burning to IME \n\\citep{gam05}. Therefore, all LVG SNe \nmay have a similar KE. The faintest SNe have a lower $v$(Si~{\\sc ii} $\\lambda6355$),\n$\\sim 9000$--$10000$\\,km~s$^{-1}$. This suggests that there may be less total burning, not\njust less burning to NSE, and thus possibly less KE, in these SNe. \n\nAs for HVG SNe, it is interesting to check whether the observed high velocity is\nrelated to the presence of high-velocity features \\citep[HVFs,][]{maz05b}. These\nare high-velocity absorptions observed mostly in the Ca~{\\sc ii}\\ IR triplet in the\nspectra of almost all SNe~Ia earlier than 1 week before maximum. The high\nvelocities measured for HVGs here may be the result of blending of Si~{\\sc ii}\\ and\nS~{\\sc ii}\\ HVFs with the lower velocity photospheric lines. Indeed, Si~{\\sc ii}\\ HVFs are\ninferred at earlier times in several SNe, but never seen detached from the\nmain, photospheric component \\citep{maz05a}. Interestingly, no correlation\nbetween pre-maximum HVFs and IME velocity at maximum is found: the six SNe that\nare common to this study and \\citet{maz05b} divide evenly among the HVG (SNe \n2002bo, 2002dj, 2002er) and LVG (SNe 2001el, 2003du, 2003kf) groups. Furthermore,\nwhile all these SNe have prominent HVFs in the Ca~{\\sc ii}\\ IR triplet about one week\nbefore maximum or earlier, it is actually the LVG SNe among them that retain strong\nCa~{\\sc ii}\\ HVFs at about maximum \\citep[][Table 3]{maz05b}. 
\n\nIt is reasonable to expect that detached HVFs should behave similarly, whether they\noccur in Ca~{\\sc ii}\\ or Si~{\\sc ii}\\ (or S~{\\sc ii}). Therefore, the rapid decrease of the HVF\nstrength in HVGs may be behind the rapid drop in the Si~{\\sc ii}\\ velocity, if Si~{\\sc ii}\\\nHVFs are not resolved. However, this leaves us with an apparent contradiction: on\nthe one hand, the LVG SNe have the longer-lasting HVFs, but on the other hand the \nHVG SNe still have the highest Si~{\\sc ii}\\ velocities at maximum. \nTaken individually, both of \nthese behaviours could be understood in the framework of a scenario where HVFs\ndetermine the line velocities, but the fact that they occur together is\ndifficult to accommodate. HVFs may be due to asymmetries in the ejection, or to\ninteraction with circumstellar material, while the velocity at maximum more\nlikely reflects global properties of the explosion. \n\nThe S~{\\sc ii}\\ velocity behaves like the Si~{\\sc ii}\\ velocity (Fig. \\ref{fig:V_S5640}). This line is\nweaker than the Si~{\\sc ii}\\ line, and therefore it is a better tracer of the\nphotosphere. The S~{\\sc ii}\\ velocity plot shows that the photosphere moves to\nprogressively lower velocities for increasing $\\Delta m_{15}$. This is again to\nbe expected, since $v_{\\mathrm{ph}}$ depends on both density and opacity. While the\ndensity may be the same, the temperature is lower in fainter SNe, so $v_{\\mathrm{ph}}$\nmay also be lower. The presence of S at $v \\sim 7000$\\,km~s$^{-1}$\\ confirms that\nthe $^{56}$Ni\\ production is small in the faster decliners. Small values for the\nfaintest SNe may also suggest a possibly smaller KE, or even a smaller mass. As\nfor HVGs, they may again be affected by line broadening, although clear S~{\\sc ii}\\\nHVFs have never been observed. The effect is indeed smaller than seen in the\nSi~{\\sc ii}\\ line, but the riddle mentioned above still stands. 
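The expansion velocities quoted throughout follow from the blueshift of the absorption minimum relative to the rest wavelength of the line. A minimal sketch of this conversion (the non-relativistic Doppler formula suffices at these velocities; the observed wavelength below is illustrative, not a measurement from our sample):

```python
C_KMS = 299792.458  # speed of light in km/s

def line_velocity(lambda_min, lambda_rest):
    """Expansion velocity in km/s from the blueshifted absorption
    minimum, using the non-relativistic Doppler formula."""
    return C_KMS * (lambda_rest - lambda_min) / lambda_rest

# a Si II 6355 minimum observed near 6122 A corresponds to ~11000 km/s
v_si = line_velocity(6122.0, 6355.0)
```

At $v/c \sim 0.04$ the relativistic correction is below the measurement uncertainty, so the linear formula is adequate.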
\n\n\n\\subsection{Spectroscopic luminosity indicators}\n\nBesides $\\mathfrak{R}$(Si~{\\sc ii}), two other line strength ratios correlate\nparticularly well with $\\Delta m_{15}$: S~{\\sc ii}\\ versus Si~{\\sc ii}\\ $\\lambda5972$ \n[$\\mathfrak{R}$(S,Si), Fig. \\ref{fig:R_S_Si5972}] and Si~{\\sc ii}\\ $\\lambda5972$ \nversus Fe $\\sim4800$ [$\\mathfrak{R}$(Si,Fe), Fig. \n\\ref{fig:R_Si5972_Fe4800}]. All correlations involve the mysterious \nSi~{\\sc ii}\\ $\\lambda5972$ line, whose EW correlates with $\\Delta m_{15}$ at least \nas well as -- if not better than -- the ratios, especially \nat high values of $\\Delta m_{15}$. Parameters of least-squares \nfits for the respective functions $\\Delta m_{15}(\\mathrm{ratio}|\\mathrm{EW})$ can be\nfound in Table \\ref{tab:leastsquarefittable}; the regression lines are also\nshown in the respective diagrams. These linear regressions have been calculated \nover the whole variety of SNe~Ia and not only over normal SNe~Ia as in \\citet{bon05}.\n\n\n\\subsection{IME ratio differences between HVG and LVG objects}\n\nThe main difference between HVG and LVG objects, leading to their separation in a\nhierarchical cluster analysis, is the velocity development of the Si~{\\sc ii}\\\n$\\lambda6355$ line after maximum. The parameter $\\langle\\dot{v}\\rangle$ seems to be \nrelated to the diversity of SNe Ia beyond the differences described by\n$\\Delta m_{15}$. HVG objects with the same $\\Delta m_{15}$ exhibit a wide range\nof IME velocities (Figs \\ref{fig:V_Si6355} and \\ref{fig:V_S5640}), \nEW(Si~{\\sc ii}\\ $\\lambda6355$) (Fig. \\ref{fig:EW_Si6355}), and\nof the ratio EW(S~{\\sc ii}) versus\\ EW(Si~{\\sc ii}\\ $\\lambda6355$) (Fig. \\ref{fig:R_S_Si6355}). 
While the\nspread of velocities could be explained by the presence of IME at different\ndepths in HVGs, the variation in the ratio EW(S~{\\sc ii})\/EW(Si~{\\sc ii}\\ $\\lambda6355$) is\ndue to the fact that only the Si~{\\sc ii}\\ $\\lambda6355$ line has a wide range of EW\nfor the HVG SNe. \n\n\n\\subsection{Fe and O versus IME line strength ratios}\n\nThe line strengths around maximum give the following picture (Si conclusions\nare always derived from the Si~{\\sc ii}\\ $\\lambda6355$ line, as mentioned above):\nbrighter objects tend to contain less oxygen at the velocities probed by the\nspectra near maximum (Fig. \\ref{fig:EW_O}). Intermediate decliners contain more silicon\nand less Fe than slow decliners (Fig. \\ref{fig:R_Si6355_Fe4800}). Thus, the photosphere at maximum\nis deeper in the Fe layer for the slow decliners, while it is still inside the Si\nlayer for the intermediate decliners. However, $v_{\\mathrm{ph}}$ for these two groups is\npractically the same, at least within LVG objects, as shown by the $v$(Si) and\n$v$(S) plots (Fig. \\ref{fig:V_Si6355} and \\ref{fig:V_S5640}). This implies that burning to NSE extends to\nouter layers in the slow decliners. Very faint objects contain more unburned\nor partially burned material (i.e. oxygen), probably at the expense of IME\n\\citep[see also][]{hof02}. This is suggested not only by the ratio EW(O~{\\sc i}\\\n$\\lambda7773$)\/EW(S~{\\sc ii}) (Fig. \\ref{fig:R_O_S}), but also by the decline of the equivalent\nwidths of the Si~{\\sc ii}\\ and S~{\\sc ii}\\ lines (Figs \\ref{fig:EW_Si6355} and \\ref{fig:EW_S}). Since the photosphere,\nas traced by the S~{\\sc ii}\\ line, is deeper as $\\Delta m_{15}$ increases, this may\nsuggest that the faster decliners have less overall burning.\n\n\\section{Conclusions}\n\nWe have systematically measured the velocities and EW of a number of spectral\nfeatures in SNe~Ia around maximum. 
The SNe have been grouped according to their\nvelocity gradient \\citep{ben05}, and we examined different EW ratios searching for\nsystematic trends and for possible hints to the general character of SN~Ia\nexplosions. Our results can be summarised as follows. \n\nThe photospheric velocity, as indicated by Si~{\\sc ii}\\ and S~{\\sc ii}\\ lines, is\napproximately constant for all LVG SNe with $\\Delta m_{15} < 1.6$. The value\ndeclines at larger $\\Delta m_{15}$. HVG SNe are found in a limited range of \n$\\Delta m_{15}$, but their velocities are highly variable. \n\nThe EWs of the Fe-dominated features are approximately constant for all SNe.\nThose of IME lines are highest for $\\Delta m_{15} \\approx 1.1$--$1.5$ and are\nsmaller for the brightest and the faintest SNe. HVG SNe have on average larger\nvalues, in particular for Si~{\\sc ii}\\ $\\lambda6355$. The O~{\\sc i}\\ $\\lambda7773$ line is\nparticularly strong in the fainter SNe, and tends to get weaker with increasing\nluminosity.\n\nThree EW ratios are good indicators of $\\Delta m_{15}$: \n$\\mathfrak{R}($Si~{\\sc ii}$)$ [EW(Si~{\\sc ii}\\ $\\lambda5972$)\/EW(Si~{\\sc ii}\\ $\\lambda6355$), \nsimilar to $\\mathcal{R}$(Si~{\\sc ii}) in \\citet{nug95}], \n$\\mathfrak{R}(\\mathrm{S,Si})$ [EW(S~{\\sc ii})\/EW(Si~{\\sc ii}\\ $\\lambda5972$)], and \n$\\mathfrak{R}(\\mathrm{Si,Fe})$ [EW(Si~{\\sc ii}\\ $\\lambda5972$)\/EW(Fe trough $\\sim4800$)]. \nAll three ratios are driven by the EW of the Si~{\\sc ii}\\ $\\lambda5972$ line, which itself might \nthus be the best spectroscopic luminosity indicator. Its behaviour and\nidentification are, however, not well understood; these relations are therefore only\nempirical. \n\nThe ratios of EW(Si~{\\sc ii}\\ $\\lambda6355$) and EW(S~{\\sc ii}) to EW(Fe trough\n$\\sim4800$) (Fig. \\ref{fig:R_Si6355_Fe4800} and \\ref{fig:R_S_Fe4800}) show \na parabolic behaviour: they are small at\nsmall $\\Delta m_{15}$, reach a peak at $\\Delta m_{15} \\approx 1.1$--$1.5$, and\nthen decline. 
While for the S~{\\sc ii}\\ line part of this behaviour could be\nexplained as the effect of increasing temperature, the Si\/Fe trend may reflect\nan abundance change. The brightest SNe have more Fe near the maximum-light\nphotosphere ($\\sim 10000$\\,km~s$^{-1}$). Intermediate decliners have more IME and less\nFe at a similar velocity. Faint SNe have a deeper photosphere, indicating both\nless $^{56}$Ni\\ and Fe-group elements, and also less IME, suggesting that burning\nwas overall reduced. This is apparently confirmed by the high O~{\\sc i}\\ EW values\nfor faint SNe. \n\nHVG SNe have the fastest and strongest IME lines. This is, however, not correlated\nwith the presence of Ca~{\\sc ii}\\ HVFs. Actually, SNe with the strongest, longest-lasting Ca~{\\sc ii}\\ HVFs are LVGs. Longer-lasting HVFs may slow down the velocity\ndecline, but this does not explain why among the SNe with HVFs the LVG SNe have the\nlower velocities.\n\nOur results are based on empirical measurements. It would be important to test\ntheir implications using models. This is made complicated by the uncertainties\nin the details of the abundance and density distributions, which can affect\nmodel results. We will attempt to do this in a future work. \n\n\n\\section*{ACKNOWLEDGEMENTS}\nThis work is supported in part by the European Community's Human \nPotential Programme under contract HPRN-CT-2002-00303, `The Physics \nof Type Ia Supernovae'. We wish to thank R. Kotak, A. Pastorello, \nG. Pignata, M. Salvo and V. Stanishev from the RTN as well as \neverybody else who provided us with -- partially unpublished -- \nspectra. SH would furthermore like to thank everybody who supported \nthis work at MPA. 
We have made use of the NASA\/IPAC Extragalactic \nDatabase (NED, operated by the Jet Propulsion Laboratory, California \nInstitute of Technology, under contract with the National Aeronautics \nand Space Administration), and the Lyon-Meudon Extragalactic Database \n(LEDA, supplied by the LEDA team at the Centre de Recherche \nAstronomique de Lyon, Observatoire de Lyon), as well as the \\textsc{iraf} \n(Image Reduction and Analysis Facility) software, distributed by the \nNational Optical Astronomy Observatory (operated by AURA, Inc., under \ncontract with the National Science Foundation), see \n\\href{http:\/\/iraf.noao.edu}{http:\/\/iraf.noao.edu}.\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\\label{sec:intro}\n\nUntil comparatively recently the classification of physical phases relied primarily on the\nparadigm of spontaneously broken symmetries introduced by Landau. Over the last decades though this scheme\nwas complemented by the concept of topological phases. In the latter the symmetries are preserved\nbut locally similar states can have different global properties,\nassociated for quantum systems typically with some twists in the wave functions that manifest themselves\nonly when considering the full ensemble of eigenstates.\nThe preservation of symmetries remains indeed a key feature of the topological phase classification\nas it is on the basis of the existence of symmetry protected, gapped states appearing on entrance\nto such phases \\cite{Chiu2016}. 
Such protected states have resulted in a significant body of continually evolving\nresearch with broad and novel potential applications including facilitating the possibility of\ntopological quantum computing \\cite{Pachos2012}.\n\nThe universality of the symmetry concept allows quite broadly a characterization of the topological properties\nto be made in terms of effective Hamiltonians capturing the generic physics in the vicinity of\npoints in the Brillouin zone that remain invariant under the specific symmetry operations.\nTopological phase transitions are characterized there by gap closures and reopenings, for instance by band\ninversion upon tuning of some control parameter.\nMost prominent is the invariance under time-reversal symmetry, and in combination with chiral and\nparity symmetry this has led to the topological classification table known as the ten-fold way \\cite{Schnyder2008,Schnyder2009,Kitaev2009,Ryu2010}.\n\nThis type of classification is limited to no or weak interactions though, and strong interactions may\nlead to additional phases with intriguing properties.\nIt is a matter of ongoing research to identify and classify such phases where a broader toolkit is required\nbeyond the symmetry classification of weakly interacting Hamiltonians \\cite{GuWen2009, GuWen2012, Kitaev2011}.\nOne such tool is the classification based upon Green's functions \\cite{Volovik,Gurarie2011,Wang2012a,Wang2012b,Wang2012c,Wang2013,Rachel2018},\nwhich is able to replicate the\nsuccess of weakly interacting classifications, whilst allowing the possibility of more readily incorporating strongly interacting phases.\n\nAn interesting characteristic arising in a clear way from the Green's function based classification is that topological\nphase transitions can arise not only through gap closures at high symmetry points. 
A topological phase\ntransition is bound to the generation of topological defects in some global property of the wave functions or\nthe Hamiltonian when probed over the support of the system's spectrum. The appearance or vanishing of defects requires a singular\nbehaviour. This is conventionally expressed through the gap closing of the Hamiltonian, corresponding\nfor the Green's functions to a merger of poles. But it is also possible in the absence\nof a gap closure by the merging of zeros of the Green's function \\cite{Gurarie2011,Volovik}, or the merging\nof a zero and a pole.\nAs the latter is unlikely to occur in the absence of strong interactions it is not ordinarily\nconsidered. Examples of this phenomenon are thus of significant fundamental interest to better understand\nthe nature of topological phases broadly. One aspect of this paper is to reveal how such an example can be\nextracted from a weakly interacting system with a partially broken spatial translation symmetry.\nThis results from the necessity of reconsidering how to obtain the topological classification in\nsuch a system, which comprises the other results of this paper.\n\n\\begin{figure}[t]\n\t\\centering\n\t\\includegraphics[width=\\columnwidth]{figure1}\n\t\\caption{\\label{Setup}Schematic representation of the continuous, spiral-ordered line of magnetic moments at $y=0$,\n\t\tperiodic in $\\pi\/k_m$ along the $x$ direction. The yellow tape represents the modification of the local\n\t\tdensity of states and thus reflects the spatial extent of the subgap wave functions.}\n\\end{figure}\n\nWithin this work, we build on the model and on key results developed in Ref.\\ \\cite{PartI}, henceforth called \\emph{Part I},\nfor the system shown in Fig.\\ \\ref{Setup}, a chain of densely packed magnetic scatterers embedded in a two dimensional (2D)\nsuperconducting substrate. 
We show that the importance of the spatial structure of the subgap states over all wavelengths\nemphasized in Part I has a direct impact on the topological properties too, and we develop a transparent topological\nclassification which accounts for the lack of translation symmetry.\nWe indeed demonstrate that although the subgap states are confined near the interface and form\none-dimensional (1D) bands it is not straightforward to eliminate the transverse spatial degree of freedom to\nbe able to use the established 1D topological classification methods. We in fact provide a rigorous proof\nbased on Choi's theorem \\cite{Choi1975,Stinespring1955} that the often used convenient method of tracing out the\ntransverse spatial degrees of freedom to obtain an effective 1D Hamiltonian is valid only for fully separable\nwave functions. This condition is met, for instance, for confined edge states of quantum Hall systems or topological insulators\nfor which this elimination is thus applicable. It is, however, not met in the present case, and an uncritical\napplication of such a method would lead to an incorrect topological classification.\n\nTo cure this problem we make use of the full\nspatial information of the exact Green's function provided through Part I. The latter comprises in particular the long\nspatial extent of the wave functions created from scattering on the magnetic impurities, emphasized earlier\nfor the long range of Yu-Shiba-Rusinov (YSR) states \\cite{Menard2015,Menard2017}.\nWe introduce a family of spatially varying\ntopological Hamiltonians that through the standard 1D classification methods provide at the impurity chain\nthe correct topological invariants, but\nalso incorporate the transition to the topologically trivial regions of the superconductor at large\ndistances to the chain. 
By smoothly varying the distance from the chain, the thus obtained family of topological invariants displays\na novel exit from and re-entrance into a topologically nontrivial phase due to the interplay between poles and\nzeros of the underlying Green's function. This phenomenon occurs in a weakly interacting system and appears to\nbe entirely due to geometric, interference-based considerations. This endows the densely packed\nmagnetic scatterers with a property that is absent for dilute chains of YSR states, which can receive a more conventional\n1D topological classification that has been amply investigated in the literature\n\\cite{Choy2011,Kjaergaard2012,NadjPerge2013,vonOppen2013,vonOppen2014,Ojanen2014,Rontynen2014,Lutchyn2014,Kotetes2014,Glazman2014,Franz2014,Heimes2015,Ojanen2015,Brydon2015,Schecter2015,Singh2015,Flensberg2016,Schecter2016,Poyhonen2016,Braunecker2013,Loss2013,Vazifeh2013,Braunecker2015,Peng2015},\nstarting from the basic phenomenology of YSR states \\cite{Yu1965,Shiba1968,Rusinov1969}.\nThe emphasis placed in Part I on determining the exact form of the Green's function of the superconductor with the\nmagnetic impurity chain becomes essential here.\nIndeed we show that the conventional 1D classification methods, obtained by tracing out the spatial degrees of freedom,\nremain valid only in the domains where the often used long wavelength approximation (LWA) is applicable.\nSince the LWA is the extrapolation of tightly packed YSR states, this confirms the applicability of the topological classification used there.\nBut it also shows that another approach, such as the one used here, is necessary when the LWA no longer applies which, as discussed in Part I,\nis the topologically most interesting range of spiral magnetic order.\nDensely packed chains have been realized in experiment and indeed show a more complex band structure\nthan expected from a simple YSR picture. 
For such systems the proposed augmented classification method should be directly applicable.\n\nThe remainder of the paper is organized as follows.\nIn Sec.\\ \\ref{sec:model} we summarize the model and the main result for the Green's function obtained in Part I.\nIn Sec.\\ \\ref{sec:1D} we introduce the concept of topological Hamiltonians that will form the basis\nfor the further discussion. In Sec.\\ \\ref{sec:comparison_to_1D} we recall the essentials of the topological classification\nof the corresponding 1D system. Section \\ref{sec:localized_classification} contains the core of this work with the\ntopological classification tailored to account for the 2D structure of the system. We conclude in Sec.\\ \\ref{sec:conclusions}.\nThe analytical results are complemented by a numerical verification based on the tight-binding model\nalready described in Part I. In the Appendix we discuss the extension of the numerics\nto the topological classification.\n\n\n\n\\section{Model and Green's functions}\\label{sec:model}\n\nThe model and its properties have been laid out in detail in Part I, and we therefore provide here\nonly a high-level summary of its main features. We set $\\hbar=1$ throughout.\nThe 2D superconductor is described by the Hamiltonian\n\\begin{equation} \\label{eq:H_0}\n\tH_0 = \\sum_{\\mathbf{k},\\sigma} \\epsilon_{\\mathbf{k}} c^\\dagger_{\\mathbf{k},\\sigma} c_{\\mathbf{k},\\sigma}\n\t+\\bigl( \\Delta c_{-\\mathbf{k},\\downarrow}c_{\\mathbf{k},\\uparrow} + \\text{h.c.} \\bigr).\n\\end{equation}\nHere $c_{\\mathbf{k},\\sigma}$ are the electron operators for momenta $\\mathbf{k}=(k_x,k_y)$ and spins $\\sigma=\\uparrow,\\downarrow = +,-$.\nThe dispersion $\\epsilon_{\\mathbf{k}} = (k_x^2+k_y^2-k_F^2)\/2m$ has effective mass $m$ and Fermi momentum $k_F$, and\n$\\Delta$ is the $s$-wave bulk gap. Spatial coordinates are denoted by $(x,y)$.\nThe dense chain of classical moments is placed at position $y=0$ and runs along $x$. 
It scatters electrons\nthrough the Hamiltonian\n\\begin{equation} \\label{eq:H_m}\n\tH_m = V_m \\int dx \\, \\mathbf{M}(x) \\cdot \\mathbf{S}(x,y=0),\n\\end{equation}\nwith scattering strength $V_m$,\nelectron spin operator $\\mathbf{S}(x,y)$,\nand the planar magnetic spiral formed by the classical spins\n$\\mathbf{M}(x) = \\cos(2 k_m x) \\hat{\\mathbf{e}}_1+\\sin(2 k_m x) \\hat{\\mathbf{e}}_2$. In the latter expression\nthe parameter $k_m$\nexpresses the spiral's periodicity of wavelength $\\pi\/k_m$ and $\\hat{\\mathbf{e}}_{1,2}$ are arbitrary orthogonal vectors.\nAlthough self-ordering mechanisms can lead to specific spiral periods \\cite{Braunecker2009a, Braunecker2009b,Braunecker2013,Loss2013,Vazifeh2013,Schecter2015,Singh2015,Braunecker2015,Hsu2016}, here\nwe keep $k_m$ as a free tuning parameter.\n\nThe $k_x$ momentum transfer of $2k_m$ by scattering on $H_m$ can be compensated by choosing the spin quantization axis\nperpendicular to $\\hat{\\mathbf{e}}_{1,2}$ and considering the gauge transformation\n$c_{\\mathbf{k},\\sigma} \\to \\tilde{c}_{\\mathbf{k},\\sigma} = c_{(k_x - \\sigma k_m, k_y),\\sigma}$ \\cite{Braunecker2010}.\nIn this new basis $\\mathbf{M}(x) \\equiv \\hat{\\mathbf{e}}_1$ so that $H_m$ corresponds to a ferromagnetic chain of\nscattering strength $V_m$ applied perpendicular to the spin quantization axis.\nAs the transformation also shifts the dispersions $\\epsilon_{\\mathbf{k},\\sigma} \\to \\epsilon_{(k_x+\\sigma k_m,k_y)}$\nthe dispersions of the subgap bands created from scattering on $H_m$ also depend sensitively on $k_m$, and indeed\nthe spin-dependent shifts are equivalent to a uni-axial spin-orbit interaction \\cite{Braunecker2010}.\n\nIn the gauge transformed basis translational symmetry along $x$ is restored, and the problem is solved\nin a mixed momentum and real space description in the variables $(k_x,y)$. 
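To make the spin-dependent shift of the dispersions explicit, the following minimal numerical sketch evaluates the gauge-transformed normal-state dispersion $\epsilon_{(k_x+\sigma k_m,k_y)}$; the choice of units with $k_F=1$ and $2m=1$ is an arbitrary illustrative assumption:

```python
def dispersion(kx, ky, sigma, k_m, k_F=1.0):
    """Gauge-transformed normal-state dispersion
    epsilon_{(k_x + sigma*k_m, k_y)} in units where 2m = 1.

    sigma = +1 or -1 labels the spin projection; the spiral wavevector
    k_m rigidly shifts each spin branch along k_x.
    """
    return (kx + sigma * k_m) ** 2 + ky ** 2 - k_F ** 2

# each spin branch crosses the Fermi level at k_x = -sigma*k_m +/- k_F,
# so the two branches are displaced by 2*k_m with respect to each other
e_cross = dispersion(0.5, 0.0, +1, k_m=0.5)  # spin-up crossing at k_F - k_m
```

This spin-dependent displacement is precisely what acts as the uni-axial spin-orbit interaction mentioned above.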
Since $H_m$ induces spin-flip scattering,\nan extended Nambu-spin basis is required, which we choose as\n\\begin{equation} \\label{eq:basis}\n\t(\\tilde{c}^\\dagger_{\\mathbf{k},\\uparrow}, \\tilde{c}^\\dagger_{\\mathbf{k},\\downarrow}, \\tilde{c}_{-\\mathbf{k},\\downarrow}, \\tilde{c}_{-\\mathbf{k},\\uparrow}),\n\\end{equation}\nwith the restriction $k_x\\ge 0$ to avoid double counting of states.\nNotice that this basis is expressed in terms of the gauge-transformed operators,\nand does not have the minus sign that is used e.g.\\ in front of $\\tilde{c}_{-\\mathbf{k},\\uparrow}$ in parts of the literature.\nThe Pauli matrices acting in Nambu space will be denoted by $\\tau_\\alpha$ and those acting in spin space by $\\sigma_\\alpha$,\nfor $\\alpha = x,y,z$. Furthermore, $\\tau_0$ and $\\sigma_0$ denote the corresponding unit matrices.\n\nThe system properties are characterized through the retarded Green's function in Nambu-spin space, which for the full\nsystem takes the form\n\\begin{align}\n\tG(\\omega,k_x,y,y') = \\; &g(\\omega,k_x,y-y') \\nonumber\\\\\n\t+ g(\\omega,&k_x,y) T(\\omega,k_x) g(\\omega,k_x,-y'),\n\\label{eq:G}\n\\end{align}\nwhere the $T$ matrix is given by the $(\\omega,k_x)$-dependent matrix\n\\begin{equation} \\label{eq:T}\n\tT(\\omega,k_x) = \\bigl[ (V_m \\tau_z \\sigma_x)^{-1}-g(\\omega,k_x,0) \\bigr]^{-1},\n\\end{equation}\nand $g(\\omega,k_x,y)$ is the bulk Green's function in the absence of $H_m$. 
For the present model the latter has the\nexact solution\n\\begin{align}\n\\label{eq:g_k_y}\n\t&g(\\omega,k_x,y)\n\t=\\sum_{\\sigma}\n\t\\frac{-i \\pi \\rho}{2 k_F\\sqrt{\\tilde{\\omega}^2-\\tilde{\\Delta}^2 + i \\eta}}\n\t\\\\\n\t&\\times\n\t\\left\\{\n\t\t\\tilde{\\omega}_+ \\xi_\\sigma \\tau_0^{\\sigma} + \\sigma \\tilde{\\Delta} \\xi_\\sigma \\tau_x^{\\sigma}\n\t\t+\n\t\t\\left[ (\\kappa_{\\sigma}^2-1) \\xi_\\sigma +\\chi_\\sigma \\right] \\tau_z^{\\sigma}\n\t\\right\\},\n\\nonumber\n\\end{align}\nwhere\n$\\rho = m \/ \\pi$ is the 2D density of states at the Fermi energy,\n$\\tilde{\\omega}=\\omega\/E_F$ and $\\tilde{\\Delta}=\\Delta\/E_F$ are dimensionless frequency and gap, for $E_F = k_F^2\/2m$,\n$\\tau_{\\alpha}^{\\pm}=\\tau_{\\alpha}(\\sigma_0 \\pm \\sigma_z)\/2$,\n$\\eta>0$ is an infinitesimal shift and $\\tilde{\\omega}_+ = \\tilde{\\omega}+i\\eta$.\nFurthermore we have defined\n\\begin{align}\n\t\\kappa_{\\sigma} &= (k_x + \\sigma k_m)\/k_F,\n\\\\\n\t\\xi_\\sigma &= p_{\\sigma,+}^{-1}\\mathrm{e}^{i |y|k_F p_{\\sigma,+}}+ p_{\\sigma,-}^{-1} \\mathrm{e}^{-i |y|k_F p_{\\sigma,-}},\n\\\\\n\t\\chi_\\sigma &= p_{\\sigma,+} \\mathrm{e}^{i |y|k_F p_{\\sigma,+}}+p_{\\sigma,-} \\mathrm{e}^{-i |y|k_F p_{\\sigma,-}},\n\\end{align}\nwith\n\\begin{equation} \\label{eq:p_sigma_pm}\n\tp_{\\sigma,\\pm}=\\bigl[1-\\kappa^2_{\\sigma}\\pm (\\tilde{\\omega}^2-\\tilde{\\Delta}^2 + i\\eta)^{1\/2}\\bigr]^{1\/2}.\n\\end{equation}\nIn Part I we provided a detailed analysis of the importance of using the Green's function of Eq.\\ \\eqref{eq:g_k_y} and not\nany commonly used approximations. 
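To make the structure of Eq.\\ \\eqref{eq:g_k_y} concrete, the following minimal numerical sketch evaluates it in units $k_F = E_F = 1$ (so $\\rho = 1\/2\\pi$); the explicit $4\\times 4$ matrix representation of the $\\tau_\\alpha^\\sigma$ is our own reconstruction for the basis of Eq.\\ \\eqref{eq:basis}:

```python
import numpy as np

# Minimal numerical sketch of the bulk Green's function g(omega, k_x, y) of
# Eq. (g_k_y), in units k_F = E_F = 1 (so m = 1/2 and rho = m/pi = 1/2pi).
# The 4x4 representation (tau within, sigma between the two Nambu pairs of the
# basis of Eq. (basis)) is our own reconstruction.
s0, sx, sz = np.eye(2), np.array([[0., 1.], [1., 0.]]), np.diag([1., -1.])
P = {+1: (s0 + sz) / 2, -1: (s0 - sz) / 2}   # spin projectors (sigma_0 + s sigma_z)/2

def g_bulk(w, kx, y, km, Delta, eta=1e-9, rho=1/(2*np.pi)):
    sq = np.sqrt(w**2 - Delta**2 + 1j*eta + 0j)       # (omega~^2 - Delta~^2 + i eta)^(1/2)
    g = np.zeros((4, 4), complex)
    for s in (+1, -1):
        kap = kx + s*km                               # kappa_sigma (k_F = 1)
        pp = np.sqrt(1 - kap**2 + sq)                 # p_{sigma,+}
        pm = np.sqrt(1 - kap**2 - sq)                 # p_{sigma,-}
        ep, em = np.exp(1j*abs(y)*pp), np.exp(-1j*abs(y)*pm)
        xi, chi = ep/pp + em/pm, pp*ep + pm*em        # xi_sigma, chi_sigma
        pref = -1j*np.pi*rho / (2*sq)
        g += pref * ((w + 1j*eta)*xi * np.kron(s0, P[s])      # omega~_+ xi tau_0^s
                     + s*Delta*xi * np.kron(sx, P[s])         # s Delta~ xi tau_x^s
                     + ((kap**2 - 1)*xi + chi) * np.kron(sz, P[s]))
    return g

g0 = g_bulk(0.0, 0.0, 0.0, km=0.5, Delta=0.1)   # Green's function at the chain
```

The assembled matrix couples only states within each spin sector $\\sigma$, depends on $y$ only through $|y|$, and for $|\\omega|<\\Delta$ decays exponentially away from the chain.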
Equation \\eqref{eq:g_k_y} remains of fundamental importance in this paper, as any\nsuch approximation would lead to an incorrect topological classification.\n\nThe direct computation of $G$ and $T$ consists of a number of\nmatrix multiplications and inversions; the last step is generally done numerically, though the relatively simple form allows for a number of analytic results which we summarize in the following.\n\nThe poles of the Green's function provide the spectrum, and all\nsubgap states arise from the poles of the $T$ matrix, hence $\\det T^{-1}=0$ at some $|\\omega|<\\Delta$ provides the criterion for\nthe existence of a subgap state.\nThe solution of $\\det T^{-1}(\\omega=0,k_x=0)=0$ is of particular interest because it provides the condition for the interaction strength\n$V_m$ at which the subgap states close the gap at the high symmetry point. This equation can be solved analytically for\nany spiral wavevector. If we define the dimensionless amplitude of the magnetic scattering strength,\n\\begin{equation} \\label{eqn:C_m}\n\tC_m = \\pi \\rho V_m \/ k_F,\n\\end{equation}\nthen the critical amplitude for the\ngap closure is given by\n\\begin{align}\\label{eqn:kx0gapclosure}\n\tC_m^\\star = \\bigl[\\bigl(1 - k_m^2\/k_F^2\\bigr)^2 + \\tilde{\\Delta}^2\\bigr]^{1\/4}.\n\\end{align}\nAs discussed in Part I the exact result of Eq.\\ \\eqref{eqn:kx0gapclosure} bears a number of interesting features.\nThe exponent of $1\/4$, rather than the $1\/2$ expected by comparison to a purely 1D model (see Sec.\\ \\ref{sec:comparison_to_1D}), occurs due to the dimensional mismatch between the substrate and the impurity chain.\nAt a ferromagnetic interface with $k_m=0$ the gap closing has only a weak dependence on $\\tilde{\\Delta}$ and can\nbe interpreted as the result of the hybridization between the YSR states forming the Shiba bands. 
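Before turning to the $k_m=k_F$ limit, Eq.\\ \\eqref{eqn:kx0gapclosure} can also be checked numerically by locating where $T^{-1}(\\omega=0,k_x=0)$ becomes singular; the sketch below works in units $k_F=E_F=1$ with $\\rho = 1\/2\\pi$, and the explicit $4\\times 4$ matrices are our own reconstruction:

```python
import numpy as np

# Numerical cross-check of the k_x = 0 gap-closure condition: scan V_m, find
# where det T^{-1}(omega=0, k_x=0) becomes singular, and compare with
# C_m* = [(1 - k_m^2/k_F^2)^2 + Delta~^2]^(1/4).  Units k_F = E_F = 1.
s0, sx, sz = np.eye(2), np.array([[0., 1.], [1., 0.]]), np.diag([1., -1.])

def g0_at_chain(km, Delta, eta=1e-9, rho=1/(2*np.pi)):
    """g(omega=0, k_x=0, y=0) of Eq. (g_k_y); the omega~_+ term vanishes here."""
    sq = np.sqrt(-Delta**2 + 1j*eta + 0j)
    g = np.zeros((4, 4), complex)
    pp, pm = np.sqrt(1 - km**2 + sq), np.sqrt(1 - km**2 - sq)
    xi, chi = 1/pp + 1/pm, pp + pm
    pref = -1j*np.pi*rho / (2*sq)
    for s in (+1, -1):
        Ps = (s0 + s*sz) / 2                     # spin projector
        g += pref * (s*Delta*xi*np.kron(sx, Ps)
                     + ((km**2 - 1)*xi + chi)*np.kron(sz, Ps))
    return g

km, Delta, rho = 0.5, 0.1, 1/(2*np.pi)
tzsx = np.kron(sz, sx)                           # tau_z sigma_x, squares to 1
g0 = g0_at_chain(km, Delta)
Vs = np.linspace(1.0, 2.5, 3001)
# smallest singular value of T^{-1} = tau_z sigma_x / V_m - g(0, 0, 0)
smin = [np.linalg.svd(tzsx/V - g0, compute_uv=False)[-1] for V in Vs]
V_hat = Vs[int(np.argmin(smin))]                 # numerically located closure
V_star = ((1 - km**2)**2 + Delta**2)**0.25 / (np.pi*rho)   # Eq. (kx0gapclosure)
```

The singular value dips to zero at $V_m = C_m^\\star k_F\/\\pi\\rho$, in agreement with the analytic result.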
On the other hand at $k_m=k_F$\none has $C_m^\\star = \\tilde{\\Delta}^{1\/2}$, and thus a gap closure caused by the direct competition between magnetic scattering and pairing.\nThis resembles a dimensionally renormalized Zeeman interaction, and as shown in Part I indeed YSR states and their hybridization are of no importance in this limit. We additionally point out that Eq.\\ \\eqref{eqn:kx0gapclosure} is in excellent agreement with the self-consistent numerical solution of the lattice version of this problem, showing that Eq.\\ \\eqref{eqn:kx0gapclosure} is indeed a general result and not specific to the chosen continuum model.\n\n\n\\section{Topological Hamiltonians}\\label{sec:1D}\n\nThe topological classification of a material is based on the calculation of topological indices.\nTwo types of approaches are common for bulk superconductors, based on either characteristics of the Hamiltonian\nat special points or integrals of Berry type connections over the Brillouin zone.\nIn the former category falls the common $\\mathbb{Z}_2$ characterization determined from the sign of\nPfaffians of matrices proportional to the Hamiltonian at time-reversal symmetric points in the\nBrillouin zone \\cite{Kitaev2001,Kane2005,Fu2006,Stanescu2011}.\nWith such an approach the classification of the purely 1D system of Sec.\\ \\ref{sec:comparison_to_1D} is\nimmediate. The latter category refers to topological indices expressed for example through\nTKNN invariants, Chern numbers and Zak phases \\cite{TKNN,Zak1989}. These cases\nrequire the knowledge of the Bloch wavefunctions. 
Equivalent indices can be obtained through Green's\nfunctions \\cite{Volovik,Wang2010}, an approach which has the advantage that interactions can be included as well\n\\cite{Gurarie2011,Wang2012a,Wang2012b,Rachel2018}.\nYet in their original formulations these indices involve multiple products of Green's functions,\ntheir derivatives, and frequency integrals in addition to momentum integrals.\nA large effort was therefore made to derive simpler equivalent expressions \\cite{Wang2012a,Wang2012b,Wang2012c,Wang2013}.\nNotable is the replacement of the frequency integral by an evaluation at $\\omega = 0$, with the Green's function\nthen used to define an effective topological Hamiltonian that correctly captures the topological\nclassification \\cite{Gurarie2011,Wang2012c,Wang2013,Budich2013,Weststrom2016,Xie2020}.\nThe latter is indeed rather intuitive since any Green's function is obtained through matrix elements\nof the resolvent $\\hat{G}(\\omega) = ( \\omega - H )^{-1}$ such that $H = - \\hat{G}^{-1}(0)$.\nSubtleties arise since Green's functions are projections of the resolvent and their inversion\ndoes not reproduce the original (possibly interacting) Hamiltonian. 
But, notwithstanding the subtleties, they correctly capture\nthe topological classification \\cite{Wang2012c,Wang2013}.\n\nFor a bulk system the topological Hamiltonian can be defined through\n\\begin{equation} \\label{eqn:H_top_bulk}\n\tH^\\text{top}_\\text{bulk}(\\mathbf{k}) = - G_\\text{bulk}^{-1}(\\omega=0,\\mathbf{k}),\n\\end{equation}\nwhere $G_\\text{bulk}$ is the Green's function of the fully translationally symmetric system.\n\nIn the following we will show that a similar approach can be adopted for our situation, although\nwe have neither translational symmetry nor a periodic structure along the $y$ direction.\nDespite this, we will demonstrate that a suitably adapted variant of Eq.\\ \\eqref{eqn:H_top_bulk}\nproduces the correct topological classification if subtleties with the $y$ dependence are\nappropriately taken into account.\n\n\n\\section{Comparison to 1D system}\\label{sec:comparison_to_1D}\n\nTo obtain a baseline for the expected topological classification we start by providing a brief account\nof the straightforward topological classification of a purely 1D model, along with the expected dimensional\nrenormalization due to the embedding in a 2D system.\n\nThe 1D equivalent of Hamiltonian $H=H_0+H_m$ [Eqs.\\ \\eqref{eq:H_0} and \\eqref{eq:H_m}] is in the gauge transformed\nbasis\n\\begin{align}\\label{eqn:1D_Hamiltonian}\n\tH(k_x) = \\sum_{\\sigma} \\epsilon_{k_x+\\sigma k_m} \\tau_z^\\sigma + \\check{V}_m \\tau_z \\sigma_x + \\Delta \\tau_x \\sigma_z,\n\\end{align}\nwritten here not in second quantized form but as a $4\\times 4$ matrix in Nambu-spin space at fixed $k_x$.\nWe identify $\\hat{\\mathbf{e}}_1$ with the spin-$x$ direction and $\\tau_z^\\pm = \\tau_z(\\sigma_0 \\pm \\sigma_z)\/2$\nand denote the magnetic potential $\\check{V}_m$ to avoid confusion with its counterpart in the 2D system.\nSince $\\check{V}_m$ acts on the entire system and not only on a line across the 2D system it takes the role of a uniform magnetic field\nwhose
original spiral was unwound through the gauge transformation.\nEquation \\eqref{eqn:1D_Hamiltonian} corresponds to the Hamiltonian of a ``Majorana wire'' \\cite{DasSarma2010,vonOppen2010,Lutchyn2010}, which has a known $\\mathbb{Z}_2$ topological classification that can be obtained from the Pfaffians of the Hamiltonian at the time-reversal symmetric momenta \\cite{Kitaev2001}.\n\nUsing this Hamiltonian we calculate the topological invariant in the usual way by transforming $H(0)$ to a skew symmetric matrix $UH(0)$,\nwhere $U=\\sigma_x \\tau_x$ [taking this form because of the chosen Nambu-spin basis given by Eq.\\ \\eqref{eq:basis}], and by determining the sign of\nthe Pfaffian $\\mathrm{pfaff}[UH(0)]$. The resulting phase diagram is plotted in Fig. \\ref{fig:1D_phase_diagram} and shows the two distinct topological phases with the transition controlled by $\\check{V}_m$. We should remark that for the continuum model there is only one time-reversal symmetric momentum, $k_x=0$, whereas in a lattice system there would also be the momentum at the boundary of the Brillouin zone.\nIn the latter this second momentum is responsible for a re-entrance to the topologically trivial phase at large magnetic interaction strength\nwhich is absent in the present continuum model.\n\n\\begin{figure}\n\t\\centering\n\t\\includegraphics[width=\\columnwidth]{figure2}\n\t\\caption{\\label{fig:1D_phase_diagram}%\n\tTopological phase diagram for a pure 1D system with a spiral magnetic field, shown as function\n\tof field winding momentum $k_m$ versus field strength $\\check{V}_m$, for $\\Delta = 0.1 E_F$.\n\tThe white area is topologically trivial and the\n\tshaded area is nontrivial. 
The separating curve is described by Eq.\\ \\eqref{eqn:1D_phase_boundary}.\n\tIn this pure 1D case the minimum at $k_m=k_F$ is reached at $\\check{V}_m = \\Delta$.\n\t}\n\\end{figure}\n\nThe boundary between two topologically distinct phases is characterized by a gap closure at a time-reversal invariant momentum.\nIf we set $\\check{C}_m = \\check{V}_m\/E_F$ in analogy to Eq.\\ \\eqref{eqn:C_m} and $\\check{\\Delta}=\\Delta\/E_F$, the gap closure at $k_x=0$ for the 1D Hamiltonian\nrequires an interaction strength $\\check{C}_m = \\check{C}_m^\\star$, with\n\\begin{align}\\label{eqn:1D_phase_boundary}\n\t\\check{C}_m^\\star = \\bigl[\\bigl(1- k_m^2\/k_F^2\\bigr)^2 + \\check{\\Delta}^2\\bigr]^{1\/2}.\n\\end{align}\nThis critical amplitude has the same functional form as its 2D counterpart $C_m^\\star$ given in Eq.\\ \\eqref{eqn:kx0gapclosure}\nbut with the exponent $1\/2$ instead of $1\/4$. This change is a dimensional renormalization, as mentioned above and explained further in Part I, due to the fact that in contrast to the 2D case $\\check{V}_m$ acts on the full transverse extension of the wave functions.\nBesides this dimensional renormalization the subgap states remain confined to the vicinity of the magnetic chain. We may thus expect that they retain a 1D character so that up to a renormalization of the phase boundaries the phase diagram itself remains unchanged from the 1D case. As a motivational argument we may indeed consider a procedure that continuously provides an increasing confinement transforming the 2D system into the pure 1D system. If this is done in each gapped phase in a manner such that the gap never closes then the topological class of the subgap states should not change.\n\nSuch an argument alone, however, is naive as it neglects that in the 1D case an extra confining potential is required, whereas in 2D the\nconfinement of the subgap states is controlled by $\\Delta$. 
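The Pfaffian computation behind Fig.\\ \\ref{fig:1D_phase_diagram} is compact enough to sketch explicitly; the code below works in units $k_F=E_F=1$, and the explicit $4\\times 4$ matrices for the basis of Eq.\\ \\eqref{eq:basis} are our own reconstruction:

```python
import numpy as np

# Z2 classification of the 1D Hamiltonian of Eq. (1D_Hamiltonian): sign of the
# Pfaffian of U H(k_x=0) with U = sigma_x tau_x.  Units k_F = E_F = 1, so
# epsilon_k = k^2 - 1.  The matrix representation is our own reconstruction.
s0, sx, sz = np.eye(2), np.array([[0., 1.], [1., 0.]]), np.diag([1., -1.])
tau_z = np.kron(sz, s0)
tau_z_sig_x = np.kron(sz, sx)
tau_x_sig_z = np.kron(sx, sz)
U = np.kron(sx, sx)                                   # sigma_x tau_x

def pf4(A):
    """Pfaffian of a 4x4 skew-symmetric matrix."""
    return A[0, 1]*A[2, 3] - A[0, 2]*A[1, 3] + A[0, 3]*A[1, 2]

def z2_1d(Vm, km, Delta):
    eps = km**2 - 1                                   # epsilon at k_x = 0
    H0 = eps*tau_z + Vm*tau_z_sig_x + Delta*tau_x_sig_z
    M = U @ H0
    assert np.allclose(M, -M.T)                       # skew-symmetry check
    return np.sign(pf4(M))

km, Delta = 0.5, 0.1
Vc = np.sqrt((1 - km**2)**2 + Delta**2)               # Eq. (1D_phase_boundary)
```

Which sign of the Pfaffian is called trivial depends on the basis ordering; in this reconstruction the weak-coupling phase comes out with sign $+1$, and the sign flips at the boundary of Eq.\\ \\eqref{eqn:1D_phase_boundary}.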
In the transition between 1D and 2D there is therefore a length scale at which\nthe boundary condition for the confinement changes its physical origin. Since topology depends on global properties of the wave functions\na change of boundary condition must always be considered carefully, and we will see that indeed the extension of wave functions\nacross this scale is of importance.\nWe therefore must consider in more detail the subtleties arising from the loss of translational symmetry or periodicity in the $y$ direction.\n\n\n\\section{Localized classification}\\label{sec:localized_classification}\n\n\\subsection{Absence of an effective 1D Hamiltonian} \\label{sec:absence_1D_H}\n\nDue to the exponential confinement of the subgap wave functions to the region near the magnetic chain the electron motion is one-dimensional.\nOne may thus consider a description in terms of an effective 1D Hamiltonian, similar to those used for the 1D states appearing through confinement in heterostructures or to the edge bands in topological systems.\n\nA complication arises here from the fact that the Nambu-spin and $y$ degrees of freedom are highly mixed in the wave functions, as visible in Eqs.\\ \\eqref{eq:g_k_y}--\\eqref{eq:p_sigma_pm}, whereas the conventional topological classification tools for 1D systems rely on the Nambu-spin structure alone. In the following sections we provide a systematic discussion showing that in such a case the topology of the subgap states can be reliably extracted by pinning $y$ to the special value $y=0$, followed by an exploration of the changes for $y \\neq 0$.\n\nIn this section, however, we analyze the conditions under which the $y$ coordinate can be traced out entirely while maintaining the validity and convenience of the Nambu-spin based classification scheme. 
We formulate two conditions, (a) and (b) below, that a reduced Hamiltonian $H_\\text{eff}^\\text{1D}$ should fulfil and show that these conditions have a close connection with Choi's theorem on completely positive trace preserving maps \\cite{Choi1975,Stinespring1955}. Based on this we demonstrate that fulfilment of the conditions necessarily imposes a complete separability of the Nambu-spin and $y$ degrees of freedom. This separability is generally not fulfilled in the present case and thus such an effective Hamiltonian cannot be constructed.\nA notable exception though is the regime in which the LWA is valid. For the latter the necessary separability is approximately true, explaining why for such a situation a topological classification based on a simple tracing out of $y$ provides correct results.\n\nA dimensional reduction is an often tacitly used procedure in the study of low dimensional systems. A quantum dot, for instance, is addressed commonly by operators creating and annihilating its different levels as entities without addressing the specific spatial structure. Interactions such as the Coulomb repulsion or spin-orbit are effective integral quantities coupling the different levels. Such a description results from\nfirst analyzing the confinement of some non-interacting Hamiltonian, providing the set of basis functions for the confined geometry, and then expanding the full Hamiltonian in this basis. The eigenstates and the spectrum are then obtained by the diagonalization of the resulting Hamiltonian matrix, with the eigenstates given by an appropriate decomposition of the basis functions.\n\nOur situation is distinct in that we already fully know the confined eigenstates. We have thus a different goal with the extraction of a lower-dimensional Hamiltonian. 
As explained above our goal is to be able to work with the Nambu-spin based symmetries and topological classification methods without having to maintain the $y$ dependence and, especially, without having to modify the methods.\n\nThe following proof of when this is possible is not specific to the considered situation but holds generally for any type of Hamiltonian with a finite subset of discrete, localized states that are split off from the continuum.\n\n\nFor a fixed $k_x$ any full 2D Hamiltonian can formally be written as\n\\begin{align}\n\tH(k_x)\n\t&=\n\t\\sum_{n=1}^{N_n} \\omega_n(k_x)\\ket{k_x,n}\\bra{k_x,n}\n\t\\notag\\\\\n\t&+ \\int d\\alpha \\, \\epsilon_{\\alpha}(k_x)\\ket{k_x,\\alpha}\\bra{k_x,\\alpha},\n\\end{align}\nwhere $n$ labels the $N_n$ discrete subgap bands and $\\alpha$ the continuum states. In our case with two subgap bands\nwe have $N_n = 2$, but we keep this number general, yet finite, for the following discussion.\n\nThe extraction of a 1D Hamiltonian requires two steps,\nthe rather easy projection on subgap energies to remove the continuum states, and the elimination of the $y$ coordinate.\n\n\nIn the following we keep $k_x$ as a fixed parameter and omit it from the notation for simplicity, without loss of generality.\nThe energy projection results in the Hamiltonian\n\\begin{equation}\n\tH'\n\t=\n\t\\sum_{n=1}^{N_n} \\omega_n \\ket{n}\\bra{n}.\n\\end{equation}\nThe states $\\ket{n}$ span an $N_n$ dimensional subspace $\\mathcal{H}'$ of the Hilbert space\n$\\mathcal{H}_{Ns} \\otimes \\mathcal{H}_y$, where $\\mathcal{H}_{Ns}$ is the Nambu-spin space and\n$\\mathcal{H}_y$ is the space of square integrable functions of $y$.\n\nWe then seek a mapping $\\Omega$ between operators on $\\mathcal{H}'$ and operators on $\\mathcal{H}_{Ns}$\nsuch that $H_\\text{eff}^\\text{1D} = \\Omega(H')$.\nWe impose the following two conditions that such a mapping needs to fulfil:\n\\begin{enumerate}[(a)]\n\\item\nThe expectation values of any operator $A$ on
$\\mathcal{H}_{Ns}$, acting with the identity on $\\mathcal{H}_y$,\n must remain invariant. This means we impose\n\\begin{equation} \\label{eq:cond_a}\n\t\\bra{n} A \\ket{n'} = \\mathrm{Tr}\\{ \\ket{n'}\\bra{n} A \\} = \\mathrm{Tr}\\{ \\Omega(\\ket{n'}\\bra{n}) A\\}.\n\\end{equation}\nNotice that $A$ is kept outside the $\\Omega$ mapping, which is not a physical requirement\nbut the choice of convenience mentioned above.\n\\item\nFor each orthogonal projector $\\ket{n}\\bra{n}$ the mapping produces again\nan orthogonal projector, $\\Omega(\\ket{n}\\bra{n}) = \\ket{u_n}\\bra{u_n}$,\nwith $\\ket{u_n}$ in $\\mathcal{H}_{Ns}$ such that $\\bracket{u_n}{u_{n'}} = \\delta_{n,n'}$.\n\\end{enumerate}\nCondition (a) is the more stringent one, but condition (b) is the physical requirement as it\nensures that $H_\\text{eff}^\\text{1D}$ remains a Hamiltonian\non $\\mathcal{H}_{Ns}$ with a spectral decomposition and the same spectrum. An immediate necessary\ncondition for (b) is that $N_n \\le \\dim(\\mathcal{H}_{Ns})=4$.\n\nTo evaluate the consequences of condition (a) let us choose a set of states $\\ket{\\phi_p} \\in \\mathcal{H}_y$, for $p=1,\\dots, N_p$, representing functions $\\phi_p(y)$ such that\n\\begin{equation}\n\t\\ket{n} = \\sum_{p=1}^{N_p} \\ket{v_n^p} \\otimes \\ket{\\phi_p},\n\\end{equation}\nwith $\\ket{v_n^p}$ in $\\mathcal{H}_{Ns}$.\nWe assume that $N_p$ is finite, and we see from Eqs.\\ \\eqref{eq:g_k_y}--\\eqref{eq:p_sigma_pm} that the $\\phi_p(y)$\nindeed are expressed by the small set of functions $\\exp(\\pm i |y| k_F p_{\\sigma,\\pm})$. 
Through an orthogonalization procedure such\nas the Gram-Schmidt method we can choose the $\\ket{\\phi_p}$ to be orthonormal, $\\bracket{\\phi_p}{\\phi_{p'}} = \\delta_{p,p'}$.\nThe normalization imposes furthermore that $\\bracket{v_n^p}{v_n^p} = 1$ but otherwise there is no requirement for\northogonality on the $\\ket{v_n^p}$.\nEquation \\eqref{eq:cond_a} is then equal to\n\\begin{equation}\n\t\\bra{n} A \\ket{n'}\n\t= \\sum_{p=1}^{N_p} \\bra{v_n^p} A \\ket{v_{n'}^p}\n\t= \\mathrm{Tr}\\Bigl\\{ \\sum_{p=1}^{N_p} \\ket{v_{n'}^p}\\bra{v_n^p} A \\Bigr\\}.\n\\end{equation}\nThis relation must hold for any $A$ and consequently\n\\begin{equation} \\label{eq:Omega}\n\t\\Omega(\\ket{n'}\\bra{n})\n\t= \\sum_{p=1}^{N_p} \\ket{v_{n'}^p} \\bra{v_n^p}\n\t= \\sum_{p=1}^{N_p} V_p \\ket{n'} \\bra{n} V_p^\\dagger.\n\\end{equation}\nThe mapping $\\Omega$ therefore takes the form of a Kraus decomposition \\cite{Kraus1971,Choi1975}\nwith the Kraus operators $V_p = \\openone_{Ns} \\otimes \\bra{\\phi_p}$, where $\\openone_{Ns}$ is the\nidentity on Nambu-spin space. Noting that $\\sum_p V_p^\\dagger V_p$ produces the identity on $\\mathcal{H}'$\nwe find that $\\Omega$ falls in the remit of Choi's theorem \\cite{Choi1975}, which states that any\nlinear mapping from bounded operators acting on $\\mathcal{H}'$ to operators acting on $\\mathcal{H}_{Ns}$\nthat is completely positive and trace preserving is necessarily of the form of Eq.\\ \\eqref{eq:Omega}.\n\nThe minimum number $N_p$ of necessary Kraus operators is known as the Choi rank, but otherwise the\n$V_p$ can be freely chosen as long as they fulfil Eq. 
\\eqref{eq:Omega} and the identity condition on $\\mathcal{H}'$.\n\nWe turn then to condition (b) and ask which choice of Kraus operators can guarantee the correct\nmapping of projectors, which thus has to take the form\n\\begin{equation}\n\t\\sum_{p=1}^{N_p} V_p \\ket{n}\\bra{n} V_p^\\dagger\n\t= \\sum_{p=1}^{N_p} \\ket{v_n^p} \\bra{v_n^p} = \\ket{u_n}\\bra{u_n}.\n\\end{equation}\nSince $\\dim(\\mathcal{H}_{Ns})=4$ we can represent $\\ket{v_n^p}$ as a $4 \\times N_p$\nmatrix $\\mathcal{V}_n$, and $\\ket{u_n}$ as a length 4 column vector $\\mathcal{U}_n$ such that\nthe latter equation becomes $\\mathcal{V}_n \\mathcal{V}_n^\\dagger = \\mathcal{U}_n \\mathcal{U}_n^\\dagger$.\nThis means $\\mathcal{V}_n$ needs to be of rank 1, and therefore all its columns are directly linearly dependent.\nIn this case we have $\\ket{v_n^p} = \\lambda_n^p \\ket{u_n}$ where the $\\lambda_n^p$ are numbers such that\n$\\sum_{p=1}^{N_p} |\\lambda_n^p|^2 = 1$. This, however, also imposes that\n\\begin{equation} \\label{eq:separable}\n\t\\ket{n} = \\ket{u_n} \\otimes \\sum_{p=1}^{N_p} \\lambda_n^p \\ket{\\phi_p}\n\t\\equiv \\ket{u_n} \\otimes \\ket{\\psi_n}.\n\\end{equation}\nThis result shows that conditions (a) and (b) are only compatible if the states $\\ket{n}$ are\nseparable in the sense of Eq.\\ \\eqref{eq:separable} in that for each $n$ the $y$ dependence is in a\nsingle function $\\psi_n(y) = \\bracket{y}{\\psi_n}$ multiplying the Nambu-spin states $\\ket{u_n}$.\nThe $\\ket{u_n}$ must be orthogonal but there is no orthogonality condition on the $\\ket{\\psi_n}$,\nonly normalization as $\\bracket{\\psi_n}{\\psi_n} = 1$. 
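The rank-1 condition can be illustrated with a short numerical sketch; the coefficient vectors below are invented purely for illustration:

```python
import numpy as np

# Illustration of the rank-1 criterion: the 4 x N_p coefficient matrix
# collecting the vectors |v_n^p> has rank 1 exactly when the state
# |n> = |u_n> tensor |psi_n> is separable.  All vectors here are invented.
def schmidt_rank(Vn, tol=1e-10):
    """Number of nonzero singular values of the 4 x N_p coefficient matrix."""
    s = np.linalg.svd(Vn, compute_uv=False)
    return int(np.sum(s > tol * s[0]))

u = np.array([1.0, 1j, 0.0, 0.5]); u = u / np.linalg.norm(u)
lam = np.array([0.8, 0.6j])                  # lambda_n^p with sum |lambda|^2 = 1
V_sep = np.outer(u, lam)                     # |v^p> = lambda^p |u>: rank 1

w = np.array([0.0, 1.0, 1.0, 0.0]) / np.sqrt(2)
V_ent = np.column_stack([0.8*u, 0.6*w])      # spin content varies with p: rank 2
```

A rank larger than 1 signals that the Nambu-spin content differs between the $y$ profiles, which is exactly the situation encountered below for the subgap states.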
Note that Eq.\\ \\eqref{eq:separable} does not imply that the Choi\nrank is $N_p=1$ as the $\\ket{\\psi_n}$ can be different for different $n$.\n\nFor separable $\\ket{n}$ the mapping $\\Omega$ then becomes particularly simple and results in just\ntracing out the $y$ degrees of freedom,\n\\begin{multline}\n\tH_\\text{eff}^\\text{1D}\n\t= \\mathrm{Tr}_y\\{ H' \\}\n\t= \\int dy \\, \\bra{y} H' \\ket{y}\n\\\\\n\t= \\sum_{n=1}^{N_n} \\omega_n \\ket{u_n} \\bra{u_n} \\int dy |\\bracket{y}{\\psi_n}|^2\n\t= \\sum_{n=1}^{N_n} \\omega_n \\ket{u_n} \\bra{u_n}.\n\\end{multline}\nThis result is remarkable in the sense that it confirms that for separable wave functions the elimination of the confining\ndegree of freedom by the intuitive simple integration is indeed the \\emph{only} way that does not change the physics of the other degrees of freedom.\nSeparability is also encountered often for wave functions confined\nby some potential such as that created by a heterostructure, or of edge states in quantum Hall systems, topological insulators, or topological superconductors, in which the envelope does not depend on spin, and in which further spatially dependent interactions that can hybridize the states are absent or negligible. In such a case it is straightforward to integrate out the spatial dependence and obtain an effective lower dimensional Hamiltonian for the bound states only.\n\n\nOn the other hand, if the states are not separable a Hamiltonian $H_\\text{eff}^\\text{1D}$ satisfying\nboth conditions (a) and (b) cannot be constructed. This is indeed the general case\nfor the subgap states at the magnetic chain.\nAs mentioned before this is seen from Eqs.\\ \\eqref{eq:g_k_y}--\\eqref{eq:p_sigma_pm} through the amplitudes $\\xi_\\sigma$ and $\\chi_\\sigma$,\nand their dependence on $p_{\\sigma,\\pm}$. 
For $|\\omega|<\\Delta$ the latter satisfy $p_{\\sigma,+}=p_{\\sigma,-}^*$.\nIf we thus let $p_{\\sigma,\\pm} = p_{\\sigma} \\exp(\\pm i \\varphi)$ for $p_\\sigma = |p_{\\sigma,+}|$ and\n$\\varphi = \\pm \\mathrm{arg}(p_{\\sigma,\\pm})$, we see that\n\\begin{align}\n\t\\xi_\\sigma &= 2p_{\\sigma}^{-1} \\cos(y k_F p_{\\sigma} \\cos(\\varphi) - \\varphi) e^{- |y| k_F p_\\sigma \\sin(\\varphi)},\n\\label{eq:xi_sigma_phi}\n\\\\\n\t\\chi_\\sigma &= 2 p_{\\sigma} \\cos(y k_F p_{\\sigma} \\cos(\\varphi) + \\varphi) e^{- |y| k_F p_\\sigma \\sin(\\varphi)}.\n\\label{eq:chi_sigma_phi}\n\\end{align}\nAs long as $p_{\\sigma,\\pm}$ is complex the division and multiplication by $p_{\\sigma,\\pm}$ add opposite\nphase offsets $\\pm \\varphi$ to the $y$ dependent oscillations of $\\xi_\\sigma$ and $\\chi_\\sigma$, so that the\n$y$ dependence is not globally factorizable from the different terms of the wave function,\nthus violating the separability of the wave function. The imaginary\npart of $p_{\\sigma,\\pm}$ is furthermore required for the exponential confinement and exists whenever\n$|\\omega|<\\Delta$.\n\nTo substantiate that indeed these factors of the Green's functions provide the relevant amplitudes\nof the wave function let us note that we can write\n\\begin{equation}\n\t\\bracket{y}{k_x,n} \\bracket{k_x,n}{y'} = \\oint_{C_{k_x}} \\frac{d\\omega}{2\\pi i} G(\\omega,k_x,y,y'),\n\\end{equation}\nwhere $C_{k_x}$ is a positively oriented closed contour encircling only the isolated pole $\\omega_n(k_x)$ of the Green's function.\nSince at $|\\omega|<\\Delta$ the pole arises from the $T$ matrix we have\n\\begin{align}\n\t\\bracket{y}{k_x,n} \\bracket{k_x,n}{y'} = &g(\\omega_n(k_x),k_x,y) \\mathrm{Res} T(\\omega_n(k_x),k_x)\n\\notag\\\\\n\t&\\times\n\tg(\\omega_n(k_x),k_x,-y'),\n\\end{align}\nwith $\\mathrm{Res}T$ the residue of the $T$ matrix. Any $y$ dependence is thus due to $g(\\omega_n(k_x),k_x,y)$ and any $y'$ dependence\nto $g(\\omega_n(k_x),k_x,-y'). 
Hence the Green's functions $g$ directly define the $y$ dependence of the wave function, containing\nthe exponential envelopes and the oscillations. As they are\nnot separable in the sense above, the subgap states do not allow the reduction to an effective 1D Hamiltonian.\n\nWe should stress, however, that the lack of separability requires that the effect of the difference between the\n$p_{\\sigma,\\pm}$ is notable, and situations can exist in which approximate separability and thus an approximately\nvalid 1D Hamiltonian can be obtained. Such a situation occurs when the exponential decay is fast compared\nwith the oscillation period, expressed by the condition $\\mathrm{Im}p_{\\sigma,+} \\gg \\mathrm{Re}p_{\\sigma,+}$.\nFrom Eq.\\ \\eqref{eq:p_sigma_pm} we see though that in the topologically most interesting limit of $\\omega \\to 0$\nthis condition does not hold. We then instead must consider the situation in which the phase shift $\\varphi$ making\nthe oscillations of $\\xi_\\sigma$ and $\\chi_\\sigma$ distinct is negligible.\nSince the characteristic range over which $y$ is evaluated is set by the decay length $1\/k_F p_\\sigma \\sin(\\varphi)$\nwe see from Eqs.\\ \\eqref{eq:xi_sigma_phi} and \\eqref{eq:chi_sigma_phi} that the phase difference $\\pm \\varphi$ can be neglected when\n$\\cot(\\varphi) \\pm \\varphi \\approx \\cot(\\varphi)$, which is the case when $\\cot(\\varphi) \\gg 1$. This represents thus\nthe limit $\\mathrm{Im}p_{\\sigma,+} \\ll \\mathrm{Re}p_{\\sigma,+}$, which is precisely the limit in which the\nlong wavelength approximation (LWA) is applicable (see Part I). Full separability is then still not guaranteed as long\nas $p_{\\sigma,\\pm}$ have different spin $\\sigma$ dependence. But at the topologically most significant $k_x=0$\nthis spin dependence drops out and an approximate 1D Hamiltonian can be obtained by integrating out the $y$\ndependence. 
This property confirms why this method of obtaining such a Hamiltonian produces valid results\nin the LWA limit.\n\nOn the other hand, as discussed in depth in Part I, the range of applicability of the LWA becomes more and\nmore restricted for increasing $k_m$ and breaks down entirely at $k_m = k_F$, at which indeed\n$\\mathrm{Im}p_{\\sigma,+} = \\mathrm{Re}p_{\\sigma,+}$ for $k_x=0$.\nFor the topological classification of the subgap states we therefore need a different approach which\nwe will describe next.\n\n\n\n\n\\subsection{Dimensional embedding}\\label{sec:1D_in_2D}\n\nAlthough it is not possible to obtain an effective 1D Hamiltonian the wave functions remain 1D and we can expect that some adjusted form of the 1D topological classification schemes remains applicable. We thus aim to extract a 1D Hamiltonian solely for the purpose of the topological classification at the expense of removing any other physical significance. To this end it is useful to examine the analogy of how 1D topological invariants arise as weak 2D topological indices in particular directions.\nFor comparison we consider the example provided in Ref.\\ \\cite{Nagaosa2012} through a generalized model of a $p+ip$ superconductor on a 2D square lattice. Instead of performing a full 2D analysis, in that work one of the momentum components $k_x$ or $k_y$ is treated as a fixed parameter and tuned to a time-reversal invariant point. In terms of the other momentum the Hamiltonian describes an effective 1D system, which in this case is equivalent to the Kitaev chain of a topological triplet superconductor. For the latter the topological classification is determined in the standard 1D way, and the obtained topological indices are identified with the weak topological 1D indices of the 2D system. The combination of the weak indices provides the characterization of the full 2D system. 
The effective 1D Hamiltonians do not necessarily have any direct physical significance but capture the topology at the significant time-reversal symmetric points. Since the system is translationally invariant these points are labelled by the momenta $k_x$ and $k_y$.\n\nWe are aiming for a similar extraction of an effective topological Hamiltonian. But due to the lack of translational symmetry along $y$ such a momentum space extraction of 1D Hamiltonians is not possible. To obtain the correct modification let us recall the role of time-reversal symmetric points. In a fermionic system with time-reversal symmetry each eigenstate has an orthogonal Kramers partner, its time reversed counterpart of opposite momentum and equal energy. At a time-reversal symmetric point the momenta of the Kramers partners coincide but their orthogonality prevents them from hybridizing and lifting the energy degeneracy. Only if more than one Kramers pair is present is a hybridization possible between states not belonging to the same pair, and only in the presence of an even number of Kramers pairs can the degeneracy be lifted entirely. The parity of the number of Kramers pairs is expressed through the $\\mathbb{Z}_2$ index associated with the time-reversal symmetric point, and the impossibility to hybridize defines a topologically nontrivial state. Although most of the considered 1D topological systems involve some magnetic elements breaking time-reversal there is throughout either an emergent or an effective time-reversal symmetry \\cite{Beck2021} for the relevant states so that the $\\mathbb{Z}_2$ classification remains a valid standard tool. A similar choice, yet without any justification of the used topological Hamiltonian, was applied for a tight-binding model at $y=0$ in Ref.\\ \\cite{Sedlmayr2021}.\n\nFor the present case and in the limit of a large bulk gap $\\Delta$ the wave functions are in the $y$ direction confined essentially to the magnetic chain position. 
The time-reversal invariant point is then given by $k_x=0$ and, through the confinement, by $y=0$. A classification through a topological Hamiltonian has to focus on this point. For a smaller $\\Delta$ the wave functions widen around $y=0$ but any motion is still possible only in the $x$ direction. The relevant time-reversal points remain $k_x$ and $y$ dependent. We notice that the operation of time-reversal on the $y$ dependence of the Green's function is to transform the latter as $G(y,y') \\to G(y',y)$, and time-reversal invariance thus requires $y'=y$. This includes the chain centre $y=y'=0$ which will provide the primary criterion for the topological classification. But it further allows the characterization at $y=y'\\neq 0$. As discussed in Sec.\\ \\ref{sec:absence_1D_H} we must not integrate out the $y$ dependence, and instead below we will explore it further.\n\n\n\n\nConsequently we define the $y$ dependent family of topological Hamiltonians through \\cite{CarrollPhD2019}\n\\begin{align}\\label{eqn:1DHam}\n\tH^\\text{top}_\\text{1D}(y) = - \\left[G(\\omega=0, k_x=0, y, y'=y)\\right]^{-1},\n\\end{align}\nwhere $\\omega = 0$, $k_x = 0$ and $y = y'$ are chosen to fulfil the necessary symmetry conditions of\nparticle-hole symmetry at time-reversal invariant points in configuration space. The inverse is taken of\nthe $4 \\times 4$ matrix $G(0,0,y,y)$.\n\nThe Hamiltonians $H^\\text{top}_\\text{1D}(y)$ represent a class of Hamiltonians obtained by slicing the 2D system\ninto effective 1D segments at a distance $y$ from the impurity chain. In this sense they are similar to the effective 1D Kitaev chain type Hamiltonians used for the determination of the weak 1D indices in the bulk system, with $y$ replacing the use of a momentum as parameter. But the $y$ parameters are not limited to special values as time-reversal symmetry is built in through $y'=y$ in the Green's function, and $y$ is tunable through all values. 
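As a sketch of how Eq.\\ \\eqref{eqn:1DHam} is evaluated in practice at the chain position $y=0$, combined with the Pfaffian sign used for the classification, consider the following (units $k_F=E_F=1$, $\\rho = 1\/2\\pi$; the explicit $4\\times 4$ matrices are our own reconstruction of the basis of Eq.\\ \\eqref{eq:basis}):

```python
import numpy as np

# Topological Hamiltonian of Eq. (1DHam) at the chain, H = -[G(0,0,0,0)]^{-1}
# with G = g + g T g, followed by the sign of the Pfaffian of U H for
# U = sigma_x tau_x.  Units k_F = E_F = 1; matrices are our reconstruction.
s0, sx, sz = np.eye(2), np.array([[0., 1.], [1., 0.]]), np.diag([1., -1.])
U = np.kron(sx, sx)

def g0(km, Delta, eta=1e-9, rho=1/(2*np.pi)):
    """g(omega=0, k_x=0, y=0), cf. Eq. (g_k_y); omega~_+ term vanishes here."""
    sq = np.sqrt(-Delta**2 + 1j*eta + 0j)
    pp, pm = np.sqrt(1 - km**2 + sq), np.sqrt(1 - km**2 - sq)
    xi, chi = 1/pp + 1/pm, pp + pm
    out = np.zeros((4, 4), complex)
    for s in (+1, -1):
        Ps = (s0 + s*sz) / 2
        out += (-1j*np.pi*rho/(2*sq)) * (s*Delta*xi*np.kron(sx, Ps)
                                         + ((km**2 - 1)*xi + chi)*np.kron(sz, Ps))
    return out

def pf4(A):
    return A[0, 1]*A[2, 3] - A[0, 2]*A[1, 3] + A[0, 3]*A[1, 2]

def pf_sign(Vm, km, Delta):
    g = g0(km, Delta)
    T = np.linalg.inv(np.kron(sz, sx)/Vm - g)   # Eq. (T) at omega = 0, k_x = 0
    G = g + g @ T @ g                           # Eq. (G) at y = y' = 0
    H = -np.linalg.inv(G).real                  # topological Hamiltonian
    return np.sign(pf4(U @ H))

km, Delta, rho = 0.5, 0.1, 1/(2*np.pi)
V_star = ((1 - km**2)**2 + Delta**2)**0.25 / (np.pi*rho)   # Eq. (kx0gapclosure)
```

Scanning $V_m$ across the critical strength corresponding to Eq.\\ \\eqref{eqn:kx0gapclosure} flips the Pfaffian sign, consistent with the renormalized 1D phase diagram.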
We will show that these Hamiltonians correctly produce the topological behaviour of the subgap states in the vicinity of $y=0$, reproducing the topological phase diagram of the pure 1D chain when taking into account the renormalized critical coupling strengths.\nThe correctness of the Green's function at all wavelengths emphasized in Part I is of crucial importance here for the validity of the phase diagram, as could already be deduced from its significance on the non-separability of the $y$ dependent wave function discussed in Sec.\\ \\ref{sec:absence_1D_H}.\n\nAs the subgap bands are exponentially localized at the chain, the topology at large $y$ must become trivial.\nSince the topological indices are integers the passage to a trivial topology has to be abrupt and there must exist an effective\nboundary between the region near the chain and the rest of the superconductor. Through $H^\\text{top}_\\text{1D}(y)$ we can\ncapture this behaviour, but we should emphasize that $H^\\text{top}_\\text{1D}(y)$ is only to be taken as an archetypical\nrepresentative of $y$ dependent topological Hamiltonians. 
The pure topological and not physical interpretation is furthermore underlined by noting that in addition to the symmetry considerations the classification depends on the change of the sign of eigenvalues about the Fermi level and not necessarily on the eigenvalues passing through the Fermi level \\cite{Gurarie2011,Wang2012a,Wang2012b,Wang2012c,Wang2013,Budich2013,Weststrom2016}.\nSince the states do not change, the transition to the trivial phase with increasing $y$ indeed cannot rely on Fermi level crossings and, as further investigated below, is instead bound to divergences in the spectrum of $H^\\text{top}_\\text{1D}(y)$ due to zeros in the defining Green's function,\nwhich themselves are the expressions of nodes in the subgap wave functions.\n\n\nBefore continuing we should mention that alternative classification methods for spatially inhomogeneous systems\nwere put forward several years ago in the form of local Chern markers \\cite{Bianco2011}, the Bott index \\cite{Hastings2011},\nand non-commutative Chern numbers or Chern number densities \\cite{Prodan2010,Prodan2011,Mascot2019a,Mascot2019b}. Such\nquantities allow spatial variations in the topological classification.\nThese approaches replace the derivatives in momentum space for the usual Chern numbers by traces over local coordinates\nin real space together with projections onto occupied states. 
We found though that for our current purpose the\nmethod we propose is more readily accessible and provides the correct topological classification.\n\n\n\n\\subsection{Topological classification near the chain}\n\n\n\\begin{figure}\n\t\\centering\n\t\\includegraphics[width=\\columnwidth]{figure3}\n\t\\caption{\\label{fig:2D_topol_phasediagram}%\n\tTopological phase diagram obtained from the topological Hamiltonians $H^\\text{top}_\\text{1D}(y=0)$ as a function of spiral wave\n\tnumber $k_m$ and magnetic scattering strength $C_m$, for $\\tilde{\\Delta}=0.1$.\n\tThis diagram corresponds to the phase diagram of the pure 1D model of Fig.\\ \\ref{fig:1D_phase_diagram}, with the same colour coding,\n\tupon the discussed dimensional renormalization, with values $C_m^\\star$ [Eq.\\ \\eqref{eqn:kx0gapclosure}] marking the transition\n\tby the solid line, instead of the values $\\check{C}_m^\\star$ [Eq.\\ \\eqref{eqn:1D_phase_boundary}].\n\tNotably the transition at $k_m=k_F$ is now at the larger $C_m = \\tilde{\\Delta}^{1\/2}$ instead of $C_m \\propto \\tilde{\\Delta}$.\n\t}\n\\end{figure}\n\n\n\\begin{figure}\n\t\\centering\n\t\\includegraphics[width=\\columnwidth]{figure4}\n\t\\caption{\\label{fig:topol_phasediagram_num}%\n\tTopological phase diagram obtained from the self-consistent numerical solution of the matching tight-binding model\n\tdescribed in Appendix \\ref{app:numerics}, as a function of spiral wave number $k_m$ and magnetic scattering strength\n\t$\\hat{V}_m$. 
Scales are given in units of the hopping integral $t$ and the lattice constant $a$.\n\tThe pairing interaction and chemical potential are chosen to produce $\\Delta \\approx 0.1 t$ and $k_F a \\approx 0.65$.\n\tAll the features of the analytic model are perfectly reproduced; only the numerical values of $\\hat{V}_m$ are not directly\n\tcomparable with $C_m$ because of the different densities of states and effective masses involved.\n\t}\n\\end{figure}\n\n\\begin{figure}\n\t\\centering\n\t\\includegraphics[width=\\columnwidth]{figure5}\n\t\\caption{\\label{fig:lowest_eigval}%\n\t\tReal space map of the absolute square of the wave function for the smallest\n\t\teigenvalues of a real space system of $600 \\times 70$ sites with a spiral magnetic chain extending\n\t\tbetween sites $x=-224$ to $x=225$ at $y=0$, with spiral wave vector $k_m = k_F$. Darker pixels show a larger\n\t\tamplitude; the eigenenergies are $\\pm E$ as shown in the panels. The shown amplitudes are summed over spin and particle-hole\n\t\tcomponents, and for better visualization of the end states we have summed furthermore over both the $+E$ and $-E$ amplitudes.\n\t\tThe magnetic impurity\n\t\tpotentials $\\hat{V}_m$ are chosen to lie (a) below, (b) at, and (c) above the gap closing strength\n\t\t$\\hat{V}_m = \\hat{V}_m^\\star = ( 4 t \\Delta)^{1\/2}$, with $t$ the hopping integral and $\\Delta=0.1 t$ the gap function.\n\t\tPanel (c) demonstrates the topological nature of the transition through the appearance of the Majorana end states\n\t\twith energy $E \\approx 0$ within the accuracy of the remaining finite size wave function overlap.\n\t}\n\\end{figure}\n\n\n\nSince the $H^\\text{top}_\\text{1D}(y)$ are matrices in Nambu-spin space their topological classification is most easily done\nthrough the Pfaffians at time-reversal symmetric points, which for the continuum model is reduced to the\nbehaviour at $k_x=0$ in the $k_m$ shifted basis.\nThe relevant topological index is then as in the 1D case above 
determined by the sign of $\\mathrm{pfaff}[ UH^\\text{top}_\\text{1D}(y)]$ \\cite{Stanescu2011},\nwhere the matrix $U=\\sigma_x \\tau_x$ again transforms the Hamiltonian to a skew symmetric matrix.\nIn Fig.\\ \\ref{fig:2D_topol_phasediagram} we plot the resulting topological phase diagram for the topological\nHamiltonian at the position $y=0$ of the impurity chain\nas a function of spiral winding $k_m$ and dimensionless magnetic interaction strength $C_m$ [see Eq.\\ \\eqref{eqn:C_m}].\nThe shaded areas are the topologically nontrivial range.\nIn comparison with Fig.\\ \\ref{fig:1D_phase_diagram} we see that the results perfectly reflect\nthe phase diagram of the pure 1D system under the aforementioned dimensional renormalization.\nThe phase transition occurs when the subgap bands touch at the Fermi level at $k_x=0$. This is exactly at the critical interaction strength $C_m^\\star$ given in Eq.\\ \\eqref{eqn:kx0gapclosure} which replaces the $\\check{C}_m^\\star$ of the pure 1D system of Eq.\\ \\eqref{eqn:1D_phase_boundary}.\nAs there is no other gap closing at $k_x=0$ and for the continuum model there is no finite momentum at the edge of the Brillouin zone there is no mechanism for a phase transition at any other interaction strength.\n\nTo corroborate the validity of these results by an independent method we compare them with the numerical solution of the\ntight-binding model that has already provided excellent quantitative verification in Part I. We perform two validations,\nthe first by comparing the matching topological invariants, and the second by demonstrating the appearance of zero modes\nlocalized at the edges of a finite chain.\n\nFor the first verification we also use the Pfaffians of the topological Hamiltonians for which we compute the Green's functions through their Lehmann representation from the eigenvalues and eigenvectors of the full 2D Hamiltonian. 
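For the $4\times4$ matrices in Nambu-spin space the Pfaffian is available in closed form, so the sign evaluation needs no special-purpose library. The sketch below (a generic illustration on a random skew-symmetric matrix, not the specific matrix elements of $UH^\text{top}_\text{1D}$) computes the Pfaffian, checks the identity $\mathrm{pfaff}^2(A)=\det(A)$, and reads off the sign that serves as the $\mathbb{Z}_2$ index.

```python
import numpy as np

def pfaffian4(A, tol=1e-10):
    """Closed-form Pfaffian of a 4x4 skew-symmetric matrix A."""
    assert A.shape == (4, 4) and np.allclose(A, -A.T, atol=tol)
    return A[0, 1] * A[2, 3] - A[0, 2] * A[1, 3] + A[0, 3] * A[1, 2]

rng = np.random.default_rng(7)
R = rng.normal(size=(4, 4))
A = R - R.T                     # generic real skew-symmetric matrix

pf = pfaffian4(A)
assert np.isclose(pf**2, np.linalg.det(A))   # pfaff^2 = det for skew-symmetric A

# Z_2 index read off from the sign of the Pfaffian (sign convention illustrative)
nu = np.sign(pf)
```

In the application one would evaluate `pfaffian4` on $UH^\text{top}_\text{1D}(y)$ with $U=\sigma_x\tau_x$ and track the sign changes of `nu` across the phase diagram.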
Appendix \\ref{app:numerics} contains a further\ndescription of the numerical evaluation.\nThe numerical results are shown in Fig.\\ \\ref{fig:topol_phasediagram_num}, in which we again plot the diagram as a function of $k_m$ and the magnetic scattering strength, which we denote for the tight-binding model by $\\hat{V}_m$.\nThe agreement is excellent as the phases and the shape of the phase transition line are perfectly matched. We should only note that\nthe numerical values of $\\hat{V}_m$ for the transition are not the same because the densities of states of the two models are different.\nWe remark furthermore that for the tight-binding model we have only considered $k_x=0$ and not its second time-reversal symmetric point $k_x=\\pi\/a$ at the edge of the Brillouin zone, as the latter is absent in the continuum model.\nWe thus exclude in the tight-binding model the possibility of leaving the topological phase at large $C_m$ due to a gap closing at $k_x=\\pi\/a$.\n\nThe second verification of the validity of the topological classification through $H^\\text{top}_\\text{1D}(y=0)$\nis shown in Fig. \\ref{fig:lowest_eigval}. In this figure we display for spiral wave vector $k_m=k_F$ how the wave functions of the\neigenvalues $\\pm E$\nclosest to the Fermi level change from an extended 1D state to localized end states when $\\hat{V}_m$\nchanges across the gap closing interaction strength $\\hat{V}_m^\\star$ corresponding to $C_m^\\star$ in the continuum model.\nFor better visualization we plot the sum of the amplitudes of the two wave functions for $\\pm E$. Due to particle-hole\nsymmetry the amplitudes are the same for the extended states, and for the localized end states we ensure in this way that\nthe states at both ends are visible. We verify furthermore that the values of $E$ (shown as labels in the figure)\ndecrease to $E=0$ within the numerical accuracy. 
Only these states are localized and we verified that the other eigenstates remain extended.\nThese end states are thus indeed the particle-hole symmetric Majorana bound states expected from a transition to\nthe topologically nontrivial phase.\nThrough these verifications we can thus confirm that the topological Hamiltonian $H^\\text{top}_\\text{1D}(y=0)$\nindeed produces the correct topological classification.\n\n\n\\subsection{Topology at $y \\neq 0$}\n\n\\begin{figure*}\n\t\\centering\n\t\\includegraphics[width=\\textwidth]{figure6}\n\t\\caption{\\label{fig:topol_phasediagram}%\n\tTopological phase diagrams obtained from the topological Hamiltonians $H^\\text{top}_\\text{1D}(y)$ as a function of spiral wave\n\tnumber $k_m$ and magnetic scattering strength $C_m$, for various $y$ and $\\tilde{\\Delta}=0.1$.\n\t(a) is identical to Fig.\\ \\ref{fig:2D_topol_phasediagram} and displays the principal phase diagram at $y=0$.\n\tThe solid line shows the primary transition at $C_m^\\star$ [Eq.\\ \\eqref{eqn:kx0gapclosure}] between the topologically trivial (white)\n\tand nontrivial (blue) regions.\n\tPanels (b)--(e) show with the dashed line the appearance at $y\\neq 0$ of the second transition at\n\tstrength $C_m^{\\star\\star}$ [Eq.\\ \\eqref{eqn:zerolocations}], determined by the zeros of the Green's function. At large $y$\n\tthe region spanned between both lines shrinks to zero such that the system becomes trivial throughout. 
At intermediate distances\n\toscillations of the dashed line about the solid line show that at the same interaction strength a region can change topology\n\tseveral times with $y$, and some trivial regions at $y=0$ can become nontrivial at some nonzero $y$.\n\tThe insets display the corresponding diagrams for the numerical solution of the tight-binding model, and\n\tshow a remarkable correspondence with the continuum model.\n\tDifferences appear only in the magnitude of regions or are due to limitations of the discrete $y$ values on the lattice\n\tas in (b) where there is no lattice site close enough to the interface to directly match $yk_F=0.5$.\n\tThe inset of (a) reproduces Fig.\\ \\ref{fig:topol_phasediagram_num}.}\n\\end{figure*}\n\n\\begin{figure*}\n\t\\centering\n\t\\includegraphics[width=\\textwidth]{figure7}\n\t\\caption{\\label{fig:topol_index_range_ybar}%\n\tPfaffian of the 1D Hamiltonian as a function of magnetic interaction strength $C_m$ for increasing distances $y$ similar to\n\tFig.\\ \\ref{fig:topol_phasediagram}, but for fixed spiral wave number $k_m=k_F$. The interaction strengths are\n\tnormalized to the critical $C_m = C_m ^{\\star} = \\tilde{\\Delta}^{1\/2}$ at which the gap closes at $k_x=0$.\n\tThe Pfaffian changes its sign at both the zero at $C_m=C_m^\\star$ (indicated by the red circle) and the\n\tpole at $C_m=C_m^{\\star\\star}$ (blue square) of $H^\\text{top}_\\text{1D}(y)$.\n\tThe circle corresponds to the cut through the solid line and the square to the cut through the\n\tdashed line in Fig.\\ \\ref{fig:topol_phasediagram} at $k_m=k_F$.\n\tWhile $C_m^\\star$ is independent of $y$, the value of $C_m^{\\star\\star}$ strongly varies with increasing $y$.\n\tAt the large $y$ in (d)\n\tthe overlap of zero and pole is well seen and the zero eliminates the divergence such that the curve\n\tis continuous throughout. 
This indicates\n\tthe absence of any topological transition at large distances even at $C_m$ values at which panels (a)--(c) show the existence of\n\ta topologically nontrivial phase nearer the impurity chain.\n\t}\n\\end{figure*}\n\n\nWith the physical significance of the $y=0$ Hamiltonian verified, we inspect the further $y$ dependence.\nSince the $H^\\text{top}_\\text{1D}(y)$ are a choice this analysis is principally only qualitative.\nNevertheless we find that the properties underlying the transition from the topology near the chain to the\ntrivial topology in the bulk are governed by physical and plausible mechanisms. For this reason we provide a\ndetailed analysis of the $y$ dependence, in particular as it reveals an interesting picture of the\nextension of the topological regions into space. Furthermore, as we show below a leading role will be played by\nthe zeros of the Green's function (meaning $\\det G = 0$ here) which is otherwise found only for interacting systems \\cite{Gurarie2011,Volovik}.\nThus the family of 1D Hamiltonians $H^\\text{top}_\\text{1D}(y)$ can also be viewed as a simulator\nof features that otherwise occur only in strongly correlated systems. Here we exhibit these features through\nthe means of $H^\\text{top}_\\text{1D}(y)$ but it could similarly be achieved by directly analyzing\n$G(\\omega,k_x,y,y)$ as a class of 1D Green's functions with an effective strong correlation physics whose\ninteraction strength is controlled by $y$.\n\nWe display the topological classification as a function of $y$ in Fig.\\ \\ref{fig:topol_phasediagram},\nwith $y=0$ in Fig.\\ \\ref{fig:topol_phasediagram}(a) repeating Fig.\\ \\ref{fig:2D_topol_phasediagram} for completeness, and with\nincreasing values of $y>0$ in Figs.\\ \\ref{fig:topol_phasediagram}(b)--(e). 
The insets show corresponding data from the numerical\nsolution of the tight-binding model, repeating Fig.\\ \\ref{fig:topol_phasediagram_num} in the inset of\npanel Fig.\\ \\ref{fig:topol_phasediagram}(a).\nAt large values of $y$ the subgap states are all exponentially suppressed and we expect that $H^\\text{top}_\\text{1D}(y)$\nexhibits only a topologically trivial phase. This is confirmed by Fig.\\ \\ref{fig:topol_phasediagram}(e)\nwhich shows that the topologically nontrivial region collapses far from the impurity chain.\nIt is interesting to analyze how this collapse occurs, and we observe in Figs.\\ \\ref{fig:topol_phasediagram}(b)--(e) that it is indeed far from\nbeing simple. Most significant is the appearance in Fig.\\ \\ref{fig:topol_phasediagram}(b) of a second transition line at which, for increasing $C_m$, the system\nbecomes trivial again. To understand this behaviour we should notice that the phase diagram of Fig.\\ \\ref{fig:topol_phasediagram}(a) results from the usual\ncrossing of an eigenvalue of the Hamiltonian through the Fermi level.\n\nIn terms of the Green's function a pole then crosses\nthe Fermi level, which coincides with the pole of the $T$ matrix. Since this pole is set by the interaction it is the same\nfor all $y$. 
This is shown by the solid line in all panels in Fig.\\ \\ref{fig:topol_phasediagram}.\nThe only way the sign of the Pfaffian can then change is when a zero of the Green's function instead of a pole\ncrosses the Fermi level, and the zeros of the Green's functions then mark the transitions to the trivial region at large $y$.\nIn Fig.\\ \\ref{fig:topol_phasediagram} we have marked the crossing of a zero of the Green's function by a dashed line\nto distinguish it from the $y$ independent crossing of the pole shown by the solid line.\nAs $y$ increases the poles and zeros increasingly coincide, causing the topologically nontrivial region eventually to vanish.\n\nTo substantiate these statements let us look first at the condition $\\mathrm{pfaff}[U H^\\text{top}_\\text{1D}(y)] = 0$. Since $\\det(A) = \\mathrm{pfaff}^2(A)$ for any\nskew symmetric matrix $A$ this condition is indeed set by the divergence of $\\det[G(0,0,y,y)]$. Such a divergence occurs through\nthe divergence of $\\det[T(\\omega=0,k_x=0)]$, which is precisely the condition for the existence of a subgap state at\nfrequency $\\omega=0$ and momentum $k_x=0$ used in Part I for the characterization of the subgap spectrum.\nSince $\\omega=0$ this is the same condition as the gap closure condition at $k_x=0$, for which we have determined the critical\ninteraction strength $C_m^\\star$ in Eq.\\ \\eqref{eqn:kx0gapclosure}.\nThus, very close to the interface, the phase transition is governed entirely by the poles of the Green's function.\n\nAs $y$ moves away from the interface the amplitude of the $T$ matrix\nterm in the Green's function at $\\omega=0$ decays exponentially and $H^\\text{top}_\\text{1D}(y)\\rightarrow - [g(0,0,0)]^{-1}$ which is topologically trivial.\nSince the denominators of $G$ are $y$ independent the necessary change of sign of the Pfaffian of $H^\\text{top}_\\text{1D}(y)$ can no longer come from the crossing of a pole of $\\det[G(0,0,y,y)]$. 
Instead it has to arise from a pole of $H^\\text{top}_\\text{1D}(y)$ itself, when one of the eigenvalues diverges, for instance, to $+\\infty$ and reappears at $-\\infty$. Since $H^\\text{top}_\\text{1D}(y)$ is given by the inverse Green's function, the location of this pole corresponds to a zero of $\\det[G(0,0,y,y)]$.\nIn Fig. \\ref{fig:topol_index_range_ybar} we visualize this effect by plotting the value of the\nPfaffian against magnetic interaction strength $C_m$ for a range of $y$ for $k_m = k_F$. The position of the pole of the Green's function is shown by the circle and the position of the zero of the Green's function (at $y \\neq 0$) by the square. For increasing $y$ the pole and zero converge until they overlap and the system remains topologically trivial for all interaction strengths.\n\n\n\\begin{figure}\n\t\\centering\n\t\\includegraphics[width = \\columnwidth]{figure8}\n\t\\caption{\\label{fig:topol_polelocation_kmkf}Plot of the interaction strengths for the zeros $C_m^{\\star\\star}(y)$ (coloured, thick curve)\n\tand poles $C_m^\\star$ (black, dashed curve, $y$ independent) of the Green's function as a function of $y$ for a range of spiral wave vectors $k_m$.\n\tThe $C_m$ axis is normalized to $C_m^\\star$ which depends on $k_m$. The inset displays the same functions with $y$ normalized to\n\tthe dimensionless oscillating scale $y k_F \\delta_+\/\\pi$ in Eq.\\ \\eqref{eqn:zerolocations}. 
The plots illustrate the generality\nof the topological strips and the possibility of entering a non-topological phase remotely from the impurity chain (the enclosed regions\n between the coloured, thick curves and the black, dashed curve).}\n\\end{figure}\n\n\nThe condition $\\det\\left[G(0,0,y,y)\\right] = 0$ actually admits an exact solution for the location of this pole in the Pfaffian.\nFrom the exact, full Green's function defined in Eq.\\ \\eqref{eq:G} we obtain\n\\begin{align}\\label{eqn:zerolocations}\n\tC_m^{\\star\\star}\n\t= \\frac{C_m^\\star}%\n\t {\\sqrt{1 + e^{-2|y| k_F \\delta_-} - 2e^{-|y| k_F \\delta_-}\\cos\\left(|y| k_F\\delta_+\\right)}},\n\\end{align}\nwhere $\\delta_\\pm = \\sqrt{2} [(C_m^\\star)^2 \\pm (1- k_m^2\/k_F^2)]^{1\/2}$ and $C_m^{\\star}$ is the magnetic interaction strength at which the Green's function admits a pole, as defined in Eq.\\ \\eqref{eqn:kx0gapclosure}.\nThe value $C_m^{\\star\\star}$ completely determines the additional, dashed phase boundary in\nFig. \\ref{fig:topol_phasediagram} and is marked by the square in Fig. \\ref{fig:topol_index_range_ybar}.\nIn Fig. \\ref{fig:topol_polelocation_kmkf} we show $C_m^{\\star\\star}$\nin comparison with $C_m^\\star$ as a function of $y$ for a selection of spiral wave vectors $k_m$.\n\nEquation \\eqref{eqn:zerolocations} shows that the topological phase diagram is governed by two dimensionless parameters:\n$y k_F \\delta_-$, which governs how $C_m^{\\star\\star}$ approaches $C_m^\\star$ away from the interface as a function of $k_m$ and $y$,\nand $y k_F \\delta_+\/\\pi$, which describes the oscillations of $C_m^{\\star\\star}$ about $C_m^\\star$. 
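Equation \eqref{eqn:zerolocations} is simple to evaluate numerically. The sketch below uses illustrative parameters ($\tilde{\Delta}=0.1$ and $k_m=k_F$, for which $C_m^\star=\tilde{\Delta}^{1/2}$ as noted in the caption of Fig.\ \ref{fig:2D_topol_phasediagram}, with $y$ in units of $1/k_F$) and confirms the limiting behaviour discussed in the text: the zero is pushed to very large $C_m$ at small $y$, oscillates about $C_m^\star$ at intermediate distances, and converges to $C_m^\star$ at large $y$.

```python
import numpy as np

Delta_t = 0.1                  # dimensionless gap (illustrative value)
km, kF = 1.0, 1.0              # spiral wave vector k_m = k_F
Cm_star = np.sqrt(Delta_t)     # pole of the Green's function at k_m = k_F

# delta_pm = sqrt(2) * [ (C_m^star)^2 +/- (1 - km^2/kF^2) ]^{1/2}
dp = np.sqrt(2) * np.sqrt(Cm_star**2 + (1 - km**2 / kF**2))
dm = np.sqrt(2) * np.sqrt(Cm_star**2 - (1 - km**2 / kF**2))

def Cm_starstar(y):
    """Zero of det G(0,0,y,y), Eq. (zerolocations)."""
    a = np.abs(y) * kF * dm    # exponential decay scale
    b = np.abs(y) * kF * dp    # oscillation scale
    return Cm_star / np.sqrt(1.0 + np.exp(-2 * a) - 2 * np.exp(-a) * np.cos(b))

y = np.linspace(0.05, 60.0, 4000)      # distances in units of 1/k_F
css = Cm_starstar(y)

# near the chain the zero is pushed to very large C_m (no dashed line at y=0),
# at intermediate y it oscillates about C_m^star (dips below, rises above),
# and at large y it converges to C_m^star, closing the nontrivial region
```

The dips of `css` below `Cm_star` are the cuts through the dashed line in Fig.\ \ref{fig:topol_phasediagram}, and the convergence `css -> Cm_star` reproduces the overlap of zero and pole at large $y$.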
These length\nscales arise from the natural scales of the Green's function given by Eq.\\ \\eqref{eq:g_k_y}.\nIndeed we have $\\delta_+ = \\mathrm{Re}(p_{\\sigma,\\pm})$ and $\\delta_- = \\pm \\mathrm{Im}(p_{\\sigma,\\pm})$,\nwhere $p_{\\sigma,\\pm}$ is taken at $\\omega=0$ and $k_x=0$ at which it is independent of $\\sigma$ and $p_{\\sigma,+}=p_{\\sigma,-}^*$.\nTherefore $\\delta_+$ sets naturally the oscillatory behaviour in $C_m^{\\star\\star}$ and $\\delta_-$ the exponential convergence at longer distances.\n\nThe oscillations of $C_m^{\\star\\star}$ about $C_m^\\star$ lead to the interesting consequence observed in\nFigs. \\ref{fig:topol_phasediagram} and \\ref{fig:topol_index_range_ybar} that when moving away\nfrom the impurity chain the topological Hamiltonian $H^\\text{top}_\\text{1D}(y)$ changes its topological classification several times before\nsettling in the topologically trivial phase.\nThis means that there are strips near the chain that can\nbe considered as alternatively trivial and nontrivial, with a width of the strips set by half of the\noscillation scale $\\Delta y \\sim \\pi\/k_F \\delta_+$. The universality of the latter scale is shown\nby the inset in Fig. \\ref{fig:topol_polelocation_kmkf}.\nWe notice in particular that in Fig. \\ref{fig:topol_phasediagram} (c), at $k_m \\lesssim 0.8 k_F$,\nentrance to the topological phase is triggered by a zero of the Green's function rather than a pole. 
This highlights the\nfact that it is possible for strips at particular $y \\neq 0$ to become nontrivial \\emph{before} the interface at $y=0$\nitself does as $C_m$ is tuned and without any requirements at all on subgap states.\nThis can be clearly seen in Fig.\\ \\ref{fig:topol_polelocation_kmkf} where there are large regions of space where\n$C_m^{\\star\\star}0$, which will almost\nalways be the case for biological databases, Chv{\\'a}tal shows the existence\nof a nonadaptive method using only \n\\[\nG=(2+\\epsilon)n \\frac{1+2\\log c}{\\log n - \\log c}\n\\]\nguesses.\nIn fact, he shows that making $G$ guesses at random will be sufficient to\ndetermine a unique solution with high probability, using only the\n$b(Q_i)$ type of scores.\nUnfortunately, this existence proof does not immediately lead to a\npolynomial-time algorithm.\nIndeed, it is NP-complete to determine if a collection of Mastermind guesses\nwith $b(Q_i)$ type of responses is satisfiable~\\cite{g-oacmg-09}. Nonetheless,\nin this paper, we will show that Mastermind attacks based on reductions to\ngroup testing can efficiently clone a sparse database of interest\nusing a sublinear number of nonadaptive queries.\n\n\\subsection{Related Privacy Models}\n\nFollowing a framework by \nBancilhon and Spyratos~\\cite{bs-pirdd-77},\nDeutsch and Papakonstantinou~\\cite{dp-pidp-05}\nand Miklau and Suciu~\\cite{ms-faidd-07}\ngive related models for\ncharacterizing privacy loss in information releases from a database, \nwhich they call \\emph{query-view security}.\nIn this framework, there is a\nsecret, $\\cal S$, that the data owner, Alice, is trying to protect.\nAttackers are allowed to ask legal queries of the database,\nwhile Alice tries to protect the information that these\nqueries leak about $\\cal S$.\nWhile this framework is related to the data-cloning attack,\nthese two are not identical, since in the data-cloning attack\nthere is no specifically sensitive part of the\ndata. 
Instead, Alice is trying to avoid releasing too much of her data\nto Bob rather than protecting any specific secret.\nSimilarly, \nKantarcio\\v{g}lu {\\it et al.}~\\cite{kjc-wdmvp-04} study privacy models that quantify\nthe degree to which data mining searches expose private information, but this\nrelated privacy model is also not directly applicable to the data-cloning attack.\n\nThere has been considerable recent work on data modification\napproaches that can help protect the privacy or\nintellectual property rights of a database by modifying its content.\nFor example, several researchers\n(e.g., see~\\cite{ak-wrd-02,ahk-swrd-03,g-qpwrd-03,%\nsv-hcwsd-04,sap-rprd-03,s-rard-07})\nadvocate the use of data watermarking to protect data rights.\nIn using this technique, data values are altered to make it easier,\nafter the fact, to track when someone has stolen information\nfrom a database.\nOf course, by that point, the data has already been cloned.\nAlternatively, several\nother researchers (e.g.,~\\cite{lefevre05,samarati01,samarati98,mw-cok-04,%\nafkmptz-at-05,bkbl-ekuct-07,zyw-pekcd-05,ba-dpoka-05}) propose\nusing \\emph{generalization} or \\emph{cell suppression}\nas methods for \nachieving quantifiable privacy-preservation in databases.\nThese techniques alter data values to protect sensitive parts of the data,\nwhile still allowing for data mining activities to be performed on the\ndatabase.\nWe assume here that Alice is not\ninterested in data modification techniques, however, for we believe that \naccuracy is critically important in several database applications.\nFor example, even a single\nbase-pair mutation in a DNA string can indicate the existence of an increased\nhealth risk.\n\nAs mentioned above, we allow for the queries Bob asks to be answered\nusing SMC protocols, which reveal no additional information about the query\nstring $Q$ and each database string $X_i$ other than the response score $r_i$.\nSuch protocols have been developed for\nthe 
kinds of comparisons that are done in\ngenomic sequences (e.g., see~\\cite{akd-spsc-03,da-smc-01,FNP04}).\nIn particular, Atallah {\\it et al.}~\\cite{akd-spsc-03} and\nAtallah and Li~\\cite{al-sosc-05}\nstudied\nprivacy-preserving protocols for edit-distance sequence comparisons, such as\nin the longest common subsequence (LCS) \nproblem (e.g.,~\\cite{h-lsacm-75, ir-acvlc-08, uah-bclcs-76}).\nTroncoso-Pastoriza {\\it et al.}~\\cite{tkc-pperd-07}\ndescribed a privacy-preserving protocol for regular-expression searching \nin a DNA sequence.\nJha {\\it et al.}~\\cite{jks-tppgc-08} give\nprivacy-preserving protocols for computing edit distance and\nSmith-Waterman similarity scores between two genomic sequences, improving\nthe privacy-preserving algorithm of Szajda {\\it et al.}~\\cite{spol-tpdp-06}.\nAligned matching results between two strings can be done in a\nprivacy-preserving manner, as well, using privacy-preserving set\nintersection protocols (e.g., \nsee~\\cite{ae-nepps-07,FNP04,vc-ssica-05,ss-ppsip-07,ss-ppsib-08})\nor SMC methods for\ndot products (e.g., see~\\cite{dfknt-uscfm-06,gmw-hpamg-87,y-psc-82}).\nIn addition, the Fairplay system~\\cite{bnp-fssmp-08} provides a general\ncompiler for building such computations.\n\nDu and Atallah~\\cite{da-psrda-01} study an SMC protocol\nfor querying a string $Q$ \nin a database of strings, ${\\cal X}$, as in our framework,\nwhere comparisons are based on approximate\nmatching (but not sequence-alignment).\nTheir SMC protocols for performing such queries provide\na best match, not a score for each string in the database.\nThus, their scheme would not be applicable in the attack framework we are\nconsidering in this paper.\nThe SMC method of Jiang {\\it et al.}~\\cite{jmcs-sddli-08}, on the other hand,\nis directly applicable.\nIt provides a vector of scores comparing a string (or vector) \n$Q$ to a sequence of\nstrings (or vectors), as we require in this paper.\nThus, our Mastermind methods can be viewed as an attack 
on repeated use of the\nSMC protocol of Jiang {\\it et al.}\n\nGoodrich~\\cite{g-magd-09} studies the \nproblem of discovering a single DNA string from \na series of genomic Mastermind queries.\nAll his methods are sequential and adaptive, however, so the only way they\ncould be applied to the data-cloning attack on an entire biological database is\nif Bob were to focus on each string $X_i$ in ${\\cal X}$ in turn. \nThat is, he would have to gear his\nqueries to specifically discovering each $X_i$ \nin $n$ distinct ``rounds''\nof computation, each of which uses a large number of string-comparison queries. \nSuch an adaptation of Goodrich's Mastermind attacks to perform data cloning,\ntherefore, would be\nprohibitively expensive for Bob. \nOur approach, instead, is based on performing a nonadaptive Mastermind\nattack on the entire database at once.\n\nWe note that others have investigated de-anonymization techniques on both\nsocial networks~\\cite{backstrom2007} and Netflix data~\\cite{narayanan2008}.\nThese works are complementary to our goal of cloning the databases themselves.\n\n\n\n\\section{Exploiting Sparsity in an Algorithmic Data-Cloning Attack}\n\\label{sec:sparsity}\nIn this section, we describe the details of our nonadaptive Mastermind data-cloning attack.\nIt is often the case that all or a large fraction of the strings in a\nreal-world string database can be \ncharacterized in terms of a small number of differences with a public\nreference string. 
In these situations, which are quite common, we can\napply a reduction to nonadaptive group testing, which results in an\nefficient Mastermind attack, as we will see.\n\n\\subsection{Non-adaptive Combinatorial Group Testing}\n\\emph{Group testing} \nwas introduced by Dorfman~\\cite{d-ddmlp-43}, during World War II,\nto test blood samples.\nThe problem he addressed was to design an efficient way to detect the\nfew thousand blood samples that were contaminated with syphilis out\nof the millions that were collected.\nHis idea was to pool drops of blood from multiple samples and test\neach pool for the syphilis antigen. By carefully arranging the\ngroup tests and then discovering which groups\ntested positive and which ones tested negative, \nhe could then identify the contaminated\nsamples using a small number of group tests (much\nsmaller than the number needed to explicitly test each individual blood sample),\nthereby sparing thousands of G.I.'s from needless disease exposure.\nIn this paper, we show that Dorfman's humanitarian discovery has an\nunfortunate dark side when it comes to privacy protection, \nfor it enables privacy leaks to be amplified in a data-cloning attack.\n\nIn the \\emph{combinatorial group testing} problem\n(e.g., Du and Hwang~\\cite{dh-cgtia-00}), one is given a set $S$ of\n$n$ items, at most $d$ of which are ``defective,''\nfor some parameter $d\\le n$, and one is interested in\nexactly determining which of the items in $S$ are defective.\nOne can form a test from any subset $T$ of $S$ and in a single step\ndetermine if $T$ contains any defective items or not.\nIf one can use information from the result of a test in formulating\nthe tests to make in the future, then the method is said to be\n\\emph{adaptive}. 
If, on the other hand, one cannot use the results\nfrom one test to determine the makeup of any future test, then the\nmethod is said to be \\emph{nonadaptive}.\nFor the application to the Mastermind attack, we are interested in\nnonadaptive methods.\n\nThere are several existing nonadaptive group testing methods~\\cite{dh-cgtia-00}, \nbut these approaches are meant for a more general context than \nthat of our database-cloning attack.\nIn particular, these methods are designed to work for \\emph{any} set\nof items having $d$ defective members.\nIn our case, we are instead interested in specific sets of items that\nare derived from the database we are interested in cloning.\nBecause of this, we can, in fact, derive better bounds than those\nimplied by existing combinatorial group-testing methods.\n\nSuppose we are given a collection, $\\cal C$,\nof sets, \n$\n{\\cal C} = \\{S_1,S_2,\\ldots, S_g\\}$,\nwhich are not necessarily distinct, such that each set $S_i$\ncontains $n$ items, at most $d$ of which are ``defective.''\nWe want to design a nonadaptive group testing scheme that can\nexactly identify the subset, $D_i$, of\nat most $d$ defective items in each set $S_i$ in $\\cal C$.\nOur approach to solving this problem\nis based on a \nrandomized construction used by Eppstein {\\it et al.}~\\cite{egh-icgtr-05}.\n\nA nonadaptive group testing algorithm\ncan actually be viewed as a $t \\times n$ 0-1 matrix, $M$.\nEach of the $n$ columns of $M$ corresponds to one of the $n$ items\nand each of the $t$ rows of $M$ represents a test. 
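To make the matrix view concrete, the following sketch builds the randomized test matrix described below (each item joins a fixed number of randomly chosen tests) and runs the naive decoder: an item is declared defective exactly when every test it belongs to comes back positive. The parameters are illustrative only.

```python
import numpy as np

rng = np.random.default_rng(1)

n, d = 200, 4          # n items, at most d defective (illustrative sizes)
t = 64                 # with 2t tests total, each column joins t/d of them

rows = 2 * t
# Each column (item) is placed in t/d tests chosen uniformly at random.
M = np.zeros((rows, n), dtype=int)
for j in range(n):
    M[rng.choice(rows, size=t // d, replace=False), j] = 1

# Pick a hidden defective set D and compute the 2t test outcomes.
D = set(rng.choice(n, size=d, replace=False).tolist())
x = np.zeros(n, dtype=int)
x[list(D)] = 1
positive = (M @ x) > 0        # a test is positive iff it contains a defective

# Naive decoding: keep exactly the items that appear in no negative test.
decoded = {j for j in range(n) if np.all(positive[M[:, j] == 1])}

# Every defective always survives; a non-defective survives only if all of
# its t/d tests happen to collide with positive tests (probability ~ 2^{-t/d}).
```

With these parameters false positives are rare, matching the probabilistic analysis that follows in the text.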
\nIf $M[i,j]=1$, then item $j$ is included in test $i$, and\nif $M[i,j]=0$, then item $j$ is not included in test $i$.\nSince this is a nonadaptive testing scheme, \nwe assume that\nno test depends on the results of any other.\nThat is, every row of the matrix $M$ is defined in advance of any\ntest outcomes.\nThe analysis question, then, is to determine how large $t$ must be \nfor these tests to reliably identify the defective items.\n\nLet $C$ denote the set of columns of $M$.\nGiven a subset $D$ of $d$ columns in $M$, and a specific column $j$\nin $C$ but not in $D$,\nwe say that $j$ is \emph{distinguishable} from $D$ if there is a row\n$i$ of $M$ such that $M[i,j]=1$ but $i$ contains a $0$ in each of the\ncolumns in $D$.\nIf each column of $M$ that is in $C$ and not in $D$ is \ndistinguishable from $D$, then we say that $M$ is \n\emph{$D$-distinguishing}.\nFurthermore, we generalize this definition, so that\nif $M$ is $D_i$-distinguishing for each subset, $D_i$, in a collection,\n$\n{\cal D}=\{D_1,D_2,\ldots,D_g\}$, \nof columns in $C$,\nthen we say that $M$ is $\cal D$-distinguished.\nFinally, we say that the matrix $M$ is $d$-{\it disjunct} \n(e.g., see Du and Hwang~\cite{dh-cgtia-00}, p.~165)\nif it is $\cal D$-distinguished for the collection, $\cal D$,\nof all of the \n$\n{n \choose d}\n$\nsubsets of size\n$d$ of $C$.\n\nNote that if $M$ is $D$-distinguishing, \nthen it leads to a simple testing\nalgorithm with respect to $D$.\nIn particular, suppose $D$ is the set of defective items\nand we perform all the tests in $M$.\nNote that, since $M$ is $D$-distinguishing,\nif an item $j$ is not in $D$, then there is a test\nin $M$ that will determine that item \n$j$ is not defective, for $j$ would belong to a test that must\nnecessarily have no defective items.\nSo we can identify $D$ in this case---the set $D$ consists of\nall items that have no test determining them to be nondefective.\n\nOf course, if $M$ is $d$-disjunct, then this simple detection 
algorithm works \nfor any set $D$ of up to $d$ defective items in $C$.\nUnfortunately, building such a matrix $M$ that is $d$-disjunct\nrequires $M$ to have $\Omega(d^2\log n\/\log d)$ \nrows~\cite{dh-cgtia-00,R94}.\nSo we will instead build a matrix that is $\cal D$-distinguished for\nthe collection, $\cal D$, of defective subsets determined by the sets\nof items in $\cal C$, with high probability.\n\nGiven a parameter $t$, which is a multiple of $d$,\nwe construct a $2t\times n$ matrix $M$ as follows.\nFor each column $j$ of $M$, we choose $t\/d$\nrows uniformly at random and we set the values of these entries to $1$, with\nthe other entries in column $j$ being set to $0$.\nNote, then, that for any set $D$ of up to $d$ defective items,\nthere are at most $t$ tests that will have positive outcomes\n(detecting defectives) and,\ntherefore, at least $t$ tests that will have negative outcomes.\nOur desire, of course, is that every \ncolumn corresponding to a sample that is distinguishable from\nthe defective ones belongs to at least one negative-outcome test.\nSo, let us focus on bounds for $t$ that allow for such a matrix $M$\nto be chosen with high probability.\n\nLet $C$ be a set of (column) items having a fixed \nsubset $D$ of $d$ defective items.\nFor each (column) item $j$ in $C$ but not in $D$, let $Y_j$ denote the\n0-1 random variable that is $1$ if $j$ is falsely identified as\na defective item by $M$ (that is, $j$ is not included in any test that\nexcludes all of the items in $D$).\nLet $Y_j$ be $0$ otherwise.\nObserve that the $Y_j$'s are independent, since $Y_j$ depends only on\nwhether the choice of rows we picked for column $j$ collides with the\nat most $t$ rows of $M$ picked for the columns corresponding\nto items in $D$.\nThere are a total of $2t$ rows, at most $t$ of which contain a test\nwith a defective item.\nThus, the probability of any non-defective item joining any\nparticular test having a defective item in it is at most 
$1\/2$;\nhence, any $Y_j$ is $1$ (a false positive)\nwith probability at most $2^{-t\/d}$, since each item is included in \n$t\/d$ tests at random.\n\nLet $Y=\\sum_{j=1}^n Y_j$, and note that\nthe expected value of $Y$, $E(Y)$, is at most ${\\hat\\mu}=n\/2^{t\/d}$.\nThus, if ${\\hat\\mu}\\le 1$,\nwe can use Markov's inequality to bound\nthe probability of the (bad) case when $Y$ is non-zero as follows:\n\\[\n\\Pr(Y\\ge 1) \\le E(Y) \\le {{\\hat\\mu}} = \\frac{{n}}{2^{t\/d}} .\n\\]\nThus, if we set\n\\[\nt\\ge 2d\\log n,\n\\]\nthen $M$ will be $D$-distinguishing with probability\nat least $1-1\/n$, for any particular subset of defective items, $D$,\nfrom a set $C$ of $n$ items.\nLikewise, if we set \n\\[\nt\\ge 2d\\log n+d\\log g,\n\\]\nthen $M$ will be $\\cal D$-distinguished, with probability at least $1-1\/n$,\nfor the collection of $g$ subsets of defective items determined by\nthe sets in $\\cal C$.\nFinally, we can use the fact \n(e.g., see Knuth~\\cite{k-acp-73})\nthat \n\\[\n{n\\choose d}<(en\/d)^d,\n\\]\nso that\nif we set \n\\[\nt\\ge 2d\\log n + d^2\\log (en\/d),\n\\]\nthen $M$ will be\n$d$-disjunct with probability at least $1-1\/n$, which implies $M$ will work\nfor any subset of at most $d$ defective items.\nTherefore, we have the following.\n\n\\begin{theorem}\n\\label{thm:math}\nIf \n\\[\nt \\ge 2d\\log n + \\min\\{d\\log g, d^2\\log (en\/d)\\},\n\\]\nthen a $2t\\times n$ random matrix $M$,\nconstructed as described above,\nis $\\cal D$-distinguished, with \nprobability at least $1-1\/n$, for any given collection, \n${\\cal D}=\\{D_1,D_2,\\ldots,D_g\\}$, of $g$\nsubsets of size $d$ of the $n$ columns in $M$.\n\\end{theorem}\n\\begin{proof}\nLet $\\cal D$ be a given collection of $g$ (not necessarily distinct)\nsubsets of size $d$ of the $n$ columns in $M$.\nIf \n\\[\nd^2\\log (en\/d) > d\\log g, \n\\]\nthen $M$ is $\\cal D$-distinguished by construction, with probability\nat least $1-1\/n$.\nIf, on the other hand,\n\\[\nd^2\\log (en\/d)\\le d\\log 
g,\n\]\nthen $M$ constructed as above\nis $d$-disjunct, with probability at least $1-1\/n$,\nwhich implies it is $\cal D$-distinguished w.h.p.~for any \ncollection $\cal D$ of subsets of size $d$ \nof the $n$ columns of $M$.\n\end{proof}\n\nAs mentioned above,\nthis is a way of constructing a simple nonadaptive group testing method for\nidentifying the defective items in the collection, $\cal D$, of\nsubsets of up to $d$ defective items determined by the sets in $\cal C$.\n\n\subsection{Reducing Mastermind to Group Testing}\nIn this section,\nwe describe how to use nonadaptive group testing to\nconstruct an efficient Mastermind cloning attack.\n\label{sec:arb}\nConsider the case when ${\cal X}$ is a database of $g$ strings of length\n$n$ each, with each of them having\nat most $d\le n$ differences from a reference string $R$.\nWe assume that each string in ${\cal X}$ is drawn from an alphabet of $c$\ncharacters, which, without loss of generality, we take to be integers in the\nrange $[0,c-1]$.\n\nSuppose, as in the previous subsection, that we have a $2t\times n$ \nnonadaptive group testing matrix, $M$, for a \nset of size $n$ having at most $d$ defectives,\nwhere \n\[\nt\ge 2d\log n + \min\{d\log g, d^2\log (en\/d)\}.\n\]\nAs before,\nwe begin our general Mastermind cloning attack by making a query \nfor the reference string, $R$.\nLet $r$ be the response score for the query for $R$.\nNext, we create $c-1$ different string queries, $Q_{k,l}$ \nfor each of the $2t$ tests in $M$, defined, for $l=1,2,\ldots,c-1$, as\nfollows:\n\[\nQ_{k,l}[j] = \left\{ \begin{array}{ll}\n\t\tR[j] & \mbox{if $M[k,j]=0$} \\\n\t\t(R[j] + l) \bmod c & \mbox{else.}\n\t\t\end{array}\n\t\t\right.\n\]\nEach such query against a string $X_i$ will have some response, $r_{k,l,i}$.\nWe interpret test $(k,l,i)$ as having a ``negative'' response, that is, it\ndoes not detect a defective, if, in making the comparison of $Q_{k,l}$ with\nthe string $X_i$, the 
response\n\[\nr_{k,l,i}=r-b_{k,0,i},\n\]\nwhere $b_{k,0,i}$ is the number of\ncharacters in $X_i$ matching their associated (color-$0$) location in $R$\nat places where there are $1$'s in row $k$ of $M$.\nIntuitively, each $1$ in row $k$ of $M$ indicates a place where we test\nfor a deviation from the reference value in $R$ at that location to \nthe color $l$ away. If the query matches the current $X_i$ string at none of these\nlocations, then none of these \nlocations hold a color that is $l$ additive shifts from its reference value. \nIn other words, defective ``items'' in the\nassociated group testing method \ncorrespond to locations where $X_i$ differs from the reference string\nwith characters that are exactly $l$ away from their reference values. \n\nOf course, being able to determine if such a test for $Q_{k,l}$ against\nstring $X_i$ is ``positive'' or\n``negative'' requires the value $b_{k,0,i}$, which we don't \nimmediately know.\nWe do immediately know the number, $b_k$, \nof $1$'s in row $k$ of $M$, however. And, after\nwe perform the queries for each $Q_{k,l}$ against a string $X_i$, \nwe learn each response $r_{k,l,i}$.\nThat is, we have $c$ linear equations in $c$ unknowns from these\nqueries and their responses. 
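As a concrete sanity check of this bookkeeping, the following sketch (ours, not from the paper; all variable names are illustrative) simulates one row $k$ of $M$ against a synthetic hidden string, then recovers $b_{k,0,i}$ from the observed responses by eliminating the other unknowns from the linear system.

```python
import numpy as np

rng = np.random.default_rng(0)
c, n = 4, 30                              # alphabet size and string length
R = rng.integers(0, c, n)                 # reference string
X = R.copy()                              # hidden string: 6 shifted positions
pos = rng.choice(n, 6, replace=False)
X[pos] = (X[pos] + rng.integers(1, c, 6)) % c
mask = rng.integers(0, 2, n)              # one row k of the test matrix M

score = lambda Q: int(np.sum(Q == X))     # Mastermind response oracle
r = score(R)                              # response for the reference query
b_k = int(mask.sum())                     # number of 1's in row k

# Responses to the c-1 shifted queries Q_{k,l}
resp = [score(np.where(mask == 1, (R + l) % c, R)) for l in range(1, c)]

# Summing the c-1 response equations and using
# b_k = b_{k,0} + b_{k,1} + ... + b_{k,c-1} eliminates the b_{k,l}'s,
# leaving an exact integer expression for b_{k,0}:
b_k0 = (b_k + (c - 1) * r - sum(resp)) // c

# Ground truth: masked positions where X still matches R
assert b_k0 == int(np.sum((mask == 1) & (X == R)))
```

The integer division is exact here, since the eliminated sum always equals $c\,b_{k,0}$ by the derivation.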
\nSpecifically, we have the equation,\n$\nb_k = b_{k,0,i} + b_{k,1,i} + \cdots + b_{k,c-1,i}$,\nwhere $b_{k,l,i}$ denotes the number of places $j$ where there is a \n$1$ in row $k$ of $M$ and the character in position $j$ of $X_i$ is \n$l$ away from the reference, that is, places\nwhere $X_i[j] = (R[j] + l)\bmod c$ and $M[k,j]=1$.\nWe also have $c-1$ equations,\n\[\nr_{k,l,i} = r - b_{k,0,i} + b_{k,l,i},\n\]\nfor $l=1,2,\ldots,c-1$,\nwhich can each be rewritten as\n$\nb_{k,l,i} = r_{k,l,i} - r + b_{k,0,i} $.\nThis allows us to rewrite the first equation as\n\[\nb_k = c\,b_{k,0,i} - (c-1)r + \sum_{l=1}^{c-1} r_{k,l,i}.\n\]\nThus, we can determine the value of $b_{k,0,i}$ as\n\[\nb_{k,0,i} = \frac{b_k + (c-1)r - \sum_{l=1}^{c-1} r_{k,l,i}}{c} ,\n\]\nwhich in turn tells us which of the $Q_{k,l}$ tests are ``positive'' and\nwhich ones are ``negative.''\nEssentially, we are performing a combinatorial group test for each possible\nshift we can make from a color in reference $R$.\n\nThus, if there are at most $d$ locations where $X_i$ differs from the\nreference string and $M$ is $\cal D$-distinguished for the set of at most $d$\nlocations of difference for each string in ${\cal X}$, then this scheme will learn\nthe complete identity of each string in ${\cal X}$.\nThat is, this method will clone ${\cal X}$, with high probability.\nTherefore, by Theorem~\ref{thm:math}, we have the following:\n\n\begin{theorem}\nGiven a database\n$ {\cal X} = (X_1,X_2,\ldots,X_g)$,\nof strings of length $n$ defined over an alphabet of size $c$,\nthere is a nonadaptive Mastermind cloning method that can discover each\nstring in $\cal X$,\nusing $2t(c-1)$ tests, with probability at least $1-(c-1)\/n$, where\n$ t $ is the smallest multiple of $d$ such that \n\[\nt\ge 2d\log n + \min\{d\log g, d^2\log (en\/d)\},\n\]\nand $d\le n$ is\nthe maximum number of differences any string in ${\cal X}$ has with $R$.\n\label{thm:sparse}\n\end{theorem}\n\n\n\n\section{Case 
Studies}\n\label{sec:experiments}\n\nTo test the real-world risks of the nonadaptive Mastermind cloning attack, we applied \nour methods to case studies involving random samples from\na number of real-world string and vector databases,\nincluding genomic and social network data. \nWe briefly describe the data sets \nused and then discuss experimental results, which reveal that relatively few \ntests are needed to recover large proportions of each database. \n\n\begin{table}[hbt!]\n\centering\n\begin{tabular}{|l|c|c|c|c|}\n\multicolumn{1}{l}{ Name} \n&\multicolumn{1}{c}{Strings} \n&\multicolumn{1}{c}{Length}\n&\multicolumn{1}{c}{Max Diff}\n&\multicolumn{1}{c}{Colors} \\\n\hline\nGenomic & 457 & 16,568 & 492 & 4 \\\nNetflix & 1,000 & 17,700 & 1988 & 6 \\\nEpinions & 2,000 & 131,827 & 517 & 3 \\\nSlashdot & 2,000 & 82,144 & 378 & 3 \\\nSlashdot (All) & 82,144 & 82,144 & 428 & 3 \\\nFacebook-UNC & 18,163 & 18,163 & 3,795 & 2 \\\nFacebook-Unif & 1,000 & 72,261,577 & 2,164 & 2 \\\n\hline\n\end{tabular}\n\end{table}\n\n\begin{figure}[hbt!]\n\centering\n\quad\includegraphics[scale=0.3]{Figures\/histGenomic.pdf} \quad\n\includegraphics[scale=0.3]{Figures\/histNetflix.pdf} \\ \quad \\\n\includegraphics[scale=0.3]{Figures\/histEpinions.pdf} \qquad\n\includegraphics[scale=0.3]{Figures\/histSlashdot.pdf} \\ \quad \\\n\includegraphics[scale=0.3]{Figures\/histFacebookUNC.pdf} \quad\n\includegraphics[scale=0.3]{Figures\/histFacebook.pdf}\n\caption{Data sets used in experiments, along with histograms of differences from reference $R$.\nThese data sets have varying characteristics.}\n\label{tbl:data}\n\end{figure}\n\n\n\subsection{Data Sets}\n\nWe analyze several different data sets with varying characteristics to test our approach.\nFor each data set, Figure \ref{tbl:data} shows the number of strings $g$, string length $n$, \nmaximum difference $d$ from the reference $R$ across strings (where ``difference'' is \ndefined as the 
number of entries in which the string differs from $R$), \nand the number of unique colors $c$ present in the database.\n\nThe Genomic database consists of 457 human mitochondrial sequences \ndownloadable from GenBank\footnote{\url{http:\/\/www.ncbi.nlm.nih.gov\/Genbank\/index.html}}. \nWe use the Revised Cambridge Reference Sequence (rCRS), of length 16,568 bp, \nas the reference string $R$. Figure \ref{tbl:data} shows the distribution of \nsequence differences from $R$, which reveals that the differences from $R$\nare relatively few and are concentrated at several different modes. In this data,\nthere are four colors, namely the nucleotides A, C, T, and G.\n\nOur movie-rating database is taken from the Netflix Prize\ndata\footnote{\url{http:\/\/www.netflixprize.com}}, \nwhich consists of 100 million movie ratings and 480,189 different Netflix \nusers. In our experiments, we use a representative subset of 1,000 randomly selected \nusers. Each user has an associated string over 17,770 movies, where each position \n$i$ stores the rating (from 1 to 5) given by the user for movie $i$. An entry of 0\nsignifies that the user has not rated that movie. Thus, there are six unique\ncolors in this database (0-5). Our reference string $R$ consists of all zeros, \nrepresenting the case where no movies are rated. According to \nFigure \ref{tbl:data}, the majority of users rate fewer than 300 movies. This \nsparsity allows the group testing attacks to be very efficient, as we will see in \nthe experiments.\n \nWe also analyze online social networks such as Epinions, Slashdot, and Facebook. \nAvailable from the SNAP Library\footnote{\url{http:\/\/snap.stanford.edu\/data\/index.html#signnets}},\nEpinions and Slashdot are ``signed'' networks, where positive and \nnegative links appear in the network's adjacency matrix \cite{Leskovec2010}. The \nEpinions network is the site's ``Web of Trust'' where users specify the other users that they \ntrust or distrust. 
Similarly, in the Slashdot network, users can specify both ``friends'' and ``foes''. \nHence, in both these databases, there are three unique colors: 0 (no\nlink), 1 (good link), and -1 (bad link). In our experiments for both Epinions and Slashdot, \nwe select a random subset of 2,000 users and utilize the corresponding rows in the adjacency matrix \nas our database. \nWe also simulate a single large-scale group testing attack on the \nentire Slashdot-All adjacency matrix with 82,144 users. \n \nThe two Facebook data sets that we analyze are Facebook-Uniform and Facebook-UNC.\nFacebook-Uniform, provided by the authors of \cite{facebook}, is an unbiased sample\nof 957K unique users obtained by performing Metropolis-Hastings random walks over the Facebook network. \nEach user is associated with a (sparse) binary vector of size 72 million which denotes adjacencies. \nWe restrict ourselves to a random subset of 1,000 users in Facebook-Uniform. Meanwhile, Facebook-UNC \nis a self-contained Facebook network of approximately 18,000 students at the \nUniversity of North Carolina at Chapel Hill \cite{facebook2}. \n\nFor all the social network data sets, we use a reference\nstring $R$ of all zeros.\nFigure \ref{tbl:data} shows that these networks are also sparse, which is often the case in many real-world settings. \n\n\subsection{Experiments}\n\nOur experimental approach is based on the analysis in Section \ref{sec:sparsity}. Similar to \nrandomly selecting $\frac{t}{d}$ rows from $2t$ rows (for each column in the nonadaptive \ngroup matrix $M$), we stochastically set each entry in $M$ to 1 with \nprobability $p = \frac{1}{2d}$. This procedure enables us to incrementally add tests \nto $M$ until the string is cloned or until a cutoff of $100,000 * c$ tests is reached, where\n$c$ is the number of unique colors in the database. We initialize with the same \nrandom seed for each string, ensuring that exactly the same tests are performed on \neach string. 
This scheme allows us to determine the actual number\nof tests needed to clone the strings. \n\n\begin{table}\n\caption{Theoretical number of tests needed to clone the entire database (a) by the baseline method \nand (b) by the nonadaptive Mastermind attack.}\n\begin{center}\n\begin{tabular}{l|c|c}\n & Baseline & Mastermind \\\n\hline\nGenomic & 49,704 & 76,752 \\\nNetflix & 88,500 & 536,760\\\nEpinions & 263,654 & 66,176 \\\nSlashdot & 164,288 & 46,872 \\\nSlashdot (All) & 164,288 & 58,208 \\\nFacebook-UNC & 18,163 & 227,700 \\\nFacebook-Uniform & 72,261,577 & 190,432 \\\n\end{tabular}\n\end{center}\n\label{tbl:theoretical}\n\end{table}\n\nBefore delving into the experimental results, we report in Table \ref{tbl:theoretical} the \ntheoretical number of tests needed to clone the entire database with high probability, \nusing the nonadaptive Mastermind technique. These numbers are based on $n$, $g$, $d$, $c$, and the \nbound in Theorem \ref{thm:sparse}. Table \ref{tbl:theoretical} also shows the number of tests needed \nby a baseline technique to exactly clone the entire database. This baseline technique\ngenerates tests based on the reference $R$. For each entry $j$ within $R$, \nand for each color offset $l$, a test is created where the entry $j$ in $R$ is replaced \nwith its color offset $l$, namely $(R[j] + l)\bmod c$. Thus, the baseline method needs \n$(c-1) * n$ tests to recover the database. Interestingly,\nthe baseline technique can beat the theoretical bound (which depends on $d$) when $n$ is small, \nas is the case for the Genomic, Netflix, and Facebook-UNC data.\n\nFortunately, our Mastermind attack can take advantage of the sparsity in the \ndata to improve its efficiency. Since each string's distance from $R$ is usually much smaller \nthan $d$, it is empirically advantageous to use a target $\hat{d}$ that is much smaller \nthan $d$. 
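The theoretical counts in Table \ref{tbl:theoretical} can be reproduced directly from the bound in Theorem \ref{thm:sparse}. In the sketch below (ours; we take $\log$ as the natural logarithm, which matches the tabulated values), $t$ is rounded up to the smallest multiple of $d$:

```python
import math

def mastermind_tests(n, g, d, c):
    """Total queries 2*t*(c-1), where t is the smallest multiple of d with
    t >= 2*d*log(n) + min(d*log(g), d*d*log(e*n/d))  (natural log)."""
    bound = 2 * d * math.log(n) + min(d * math.log(g),
                                      d * d * math.log(math.e * n / d))
    t = d * math.ceil(bound / d)
    return 2 * t * (c - 1)

def baseline_tests(n, c):
    return (c - 1) * n          # one test per entry and color offset

# Parameters from the data-set table reproduce the reported counts
assert mastermind_tests(16_568, 457, 492, 4) == 76_752           # Genomic
assert mastermind_tests(72_261_577, 1_000, 2_164, 2) == 190_432  # Facebook-Uniform
assert baseline_tests(16_568, 4) == 49_704
```

The `min` term switches between the database-specific bound ($d\log g$) and the fully $d$-disjunct bound; for all the data sets here, $g$ is small enough that the database-specific branch wins.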
For instance, the Netflix data has a maximum difference $d=1988$, but the mean \ndifference from $R$ is $d_{\\text{mean}}=202$ and the median is \n$d_{\\text{median}}=92$. Thus, there are different possible choices for $\\hat{d}$.\n\n\\begin{figure*}\n\\centering\n\\includegraphics[scale=0.32]{Figures\/medianGenomic.pdf} \\qquad\n\\includegraphics[scale=0.32]{Figures\/medianNetflix.pdf} \\\\ \\quad \\\\\n\\includegraphics[scale=0.32]{Figures\/medianEpinions.pdf} \\quad\n\\includegraphics[scale=0.32]{Figures\/medianSlashdot.pdf} \\quad\n\\includegraphics[scale=0.32]{Figures\/medianFacebookUNC.pdf} \n\\caption{Mean and median number of tests required until string is cloned (averaged across all strings in database), for various settings of target distance $\\hat{d}$. Typically, it is advantageous to \nset $\\hat{d}$ to be much less than $d$, since most of the vectors are sparse and are close to the reference R.}\n\\label{fig:target}\n\\end{figure*}\n\n\\begin{figure*}\n\\centering\n\\includegraphics[scale=0.32]{Figures\/dotsGenomic.pdf} \\qquad\n\\includegraphics[scale=0.32]{Figures\/dotsNetflix.pdf} \\\\ \\quad \\\\\n\\includegraphics[scale=0.32]{Figures\/dotsEpinions.pdf} \\quad\n\\includegraphics[scale=0.32]{Figures\/dotsSlashdot.pdf} \\quad\n\\includegraphics[scale=0.32]{Figures\/dotsFacebookUNC.pdf} \n\\caption{Number of tests required to clone each string, ordered by the \nstring's distance from R. Each string is represented by a dot. While the number of tests increases\nrapidly for small $\\hat{d}$ when the vector is far from $R$, note that many vectors are close to $R$, allowing for a majority of the database to be captured quickly.}\n\\label{fig:dot}\n\\end{figure*}\n\n\nFor each data set (excluding Slashdot-All and Facebook-Uniform due to their large scale), Figure \\ref{fig:target} shows the number of tests needed to exactly clone a string (averaged across all strings in the database), as a function of $\\hat{d}$. 
\nIn a few instances, when the strings are very far from $R$, the algorithm may reach the \ncutoff value, causing the mean to be undervalued; thus, we also plot \nthe median number of tests since the median is more robust to outliers. Generally, we see that \nthe mean and median numbers of required tests decrease as $\hat{d}$ is decreased from $d$. For \ninstance, for the Slashdot database, the mean\/median number of tests is 18,000 \nif $\hat{d} = d = 378$, but if $\hat{d} = 50$, the mean\/median number of tests is 3,000 and if\n$\hat{d} = d_{\text{mean}} = 13$, the median number of tests required is 700. The \nmean number of tests can increase if $\hat{d}$ is too small, though. If \n$\hat{d} = d_{\text{mean}} = 13$, the mean number of tests required is around 4,000. Thus, there is \na tradeoff. If $\hat{d}$ is too small, it would take longer to exactly clone a string that is \nfar away from $R$. If $\hat{d}$ is too large (e.g. $\hat{d}=d$), then many inefficient tests would \nbe performed on strings that are close to $R$. \nWe assume that a good estimate for $\hat{d}$ (such as the median \ndistance from $R$) can be obtained a priori, e.g. through scientific knowledge in the case of \nthe Genomic database, or publicly available information in the cases of Netflix, Epinions, \nSlashdot, and Facebook.\n\n\nWe also investigate the\nrelationship between the number of required tests and the vector's\ndistance from $R$. In Figure \ref{fig:dot}, we observe that the number of \ntests required to clone a vector is very low (and nearly constant) \nwhen the vector's distance from $R$ is itself low and close to $\hat{d}$. As the vector's distance increases, the number \nof required tests grows more quickly due to the mismatch between the \ndistance and $\hat{d}$. For each data set,\nwe display different scatter plots for different settings of $\hat{d}$. 
For instance,\nfor the Slashdot data, the number of tests is relatively constant across all distances\nwhen $\hat{d}=200$; however, at this setting, the number of required tests is at least 10,000,\neven when the vector is close to the reference $R$. In contrast, when $\hat{d}=3$, the number of required tests\nis only in the hundreds for vectors whose distance is in the vicinity of $\hat{d}$; however, when the vector's distance from $R$ is significantly\ngreater (e.g. over 100), the number of required tests increases dramatically. It is important to note that most vectors are\nclose to $R$ due to the sparsity of the data, and thus, even though the required number of tests increases dramatically\nwhen the distance from $R$ is large, there are relatively few vectors that fall within this regime. \n\n\n\begin{figure}\n\centering\n\includegraphics[scale=0.3]{Figures\/convergenceNetflix.pdf}\n\caption{Error as a function of the number of tests for a single Netflix user\nwho has rated 68 movies, for various $\hat{d}$.}\n\label{fig:convergence}\n\end{figure}\n\nProviding another perspective, Figure \ref{fig:convergence} shows the decrease in error (defined as the number of \ndifferences between the string and the state of the reconstructed string) as the number of \ntests increases, for a randomly selected Netflix user who has rated 68 movies. One can \nsee that using $\hat{d}=202$ induces a slower rate of convergence than when using smaller \nsettings for $\hat{d}$. The case where $\hat{d}=d=1988$ is not shown since its rate of \nconvergence is even slower. 
\n\n\n\\begin{figure}\n\\centering\n\\quad\\includegraphics[scale=0.3]{Figures\/percentGenomic.pdf} \\qquad\n\\includegraphics[scale=0.3]{Figures\/percentNetflix.pdf} \\\\\\quad \\\\\n\\includegraphics[scale=0.3]{Figures\/percentEpinions.pdf} \\qquad\n\\includegraphics[scale=0.3]{Figures\/percentSlashdot.pdf} \\\\\\quad \\\\\n\\quad\\includegraphics[scale=0.3]{Figures\/percentFacebookUNC.pdf} \n\\caption{Percentage of strings cloned as a function of the number of tests, for each data set, using various $\\hat{d}$.}\n\\label{fig:percentage}\n\\end{figure}\n\n\\begin{figure}\n\\centering\n\\includegraphics[scale=0.3]{Figures\/percentSlashdotAll.pdf} \\qquad\n\\includegraphics[scale=0.29]{Figures\/percentFacebook.pdf}\n\\caption{Percentage of strings cloned as a function of the number of tests, for the large-scale\ndata sets. Slashdot-All has a large number of strings\n($g=82,144$) while Facebook-Uniform has large vector length ($n=72,261,577$). }\n\\label{fig:percentageLargeScale}\n\\end{figure}\n\n\nIn Figure \\ref{fig:percentage}, the percentage of the database cloned by the\nnonadaptive Mastermind attack is plotted as a function of the number of tests, \nfor various $\\hat{d}$. We highlight some examples which demonstrate the \nefficiency of this attack. For the Genomic data (using \n$\\hat{d} = d_{\\text{median}} = 18$), 78\\% of the database is successfully recovered \nafter 2,000 tests, and over 99\\% of the database is recovered after 3,000 tests, which\nis significantly less than both the baseline result and the theoretical bound in \nTable~\\ref{tbl:theoretical}.\nFor the Netflix data (using $\\hat{d} = d_{\\text{median}} = 92$), 63\\% of the strings \nare recovered after 10,000 tests. For the Epinions data (using $\\hat{d} = d_{\\text{mean}} = 8$), \n68\\% of the strings are recovered after only 500 tests. 
For the Slashdot data \n(using $\hat{d} = d_{\text{mean}} = 13$), 82\% of the strings are recovered after \nonly 1,000 tests.\n\nFor Facebook-UNC, we see that the Mastermind \nattack displays different behavior for different choices of $\hat{d}$. When $\hat{d}=3$, the attack is able to \nquickly recover (the sparsest) 15\% of the data set after only 500 tests, but as the number of tests increases, \nthe rate of progress slows significantly. When $\hat{d}=25$, 52\% of the database has been successfully recovered \nafter 2,000 tests. Thus, using only a couple thousand nonadaptive tests, we are able to \nclone the friend lists of half (9K out of 18K) of the Facebook users at the University of \nNorth Carolina. \n\nWe also performed a large-scale nonadaptive Mastermind attack on Slashdot-All with 82,144 users. \nFigure \ref{fig:percentageLargeScale} shows that \n55\% of the strings are recovered after 2,500 \ntests and that 81\% of the strings are recovered after 4,000 tests, using $\hat{d} = 50$.\nEven when using a $\hat{d}$ which may be suboptimal, our empirical results suggest that it \nis possible to substantially outperform both the baseline method as well as the theoretical \nbounds in Table \ref{tbl:theoretical} in practice, as long as $\hat{d}$ is chosen to be less than $d$.\n\nWe also ran the same experiment on Facebook-Uniform for $\hat{d}=108$ (the median distance from $R$). Figure~\ref{fig:percentageLargeScale} shows that over 70\% of the data set can be reconstructed with 10,000 tests, despite the fact\nthat the vector length of this data set is huge ($n=72,261,577$). Since Facebook-Uniform contains an unbiased sample of \nusers, these users are representative of the global Facebook population. Furthermore, our theory states that\nthe number of required tests increases at a rate of at most $\log(g)$, where $g$ is the number of Facebook users. 
In fact, the theoretical\nnumber of tests needed to guarantee that 50\% of a 300-million user Facebook network is cloned is less\nthan 20,000 (assuming\n$d_{\text{median}}=130$)\footnote{According to \url{http:\/\/www.facebook.com\/press\/info.php?statistics}, $d_{\text{mean}}=130$, and so \n$d_{\text{median}}$ should be even smaller, suggesting that the Mastermind attack can be even more efficient.}. These results imply that an attacker may be able to recover \nover half of the global Facebook social network with several thousand seemingly\ninnocuous nonadaptive Mastermind queries. \n\nIt is worth noting that experiments have also been conducted on a variety of other data sets not\nmentioned in this paper; the nonadaptive Mastermind attack also performs \nvery well on those data sets. Results on cloning databases of binary attribute \nvectors (i.e. where the number of colors $c=2$) are described in previous work~\cite{AsuncionGoodrich2010}.\n\nOur empirical results have shown that there is sensitivity to the choice of $\hat{d}$ in certain cases. \nOne possible improvement \nis to use a tiered approach, where $\hat{d}_1$ is used to construct the first 5,000 tests, \n$\hat{d}_2$ is used to construct the next 5,000 tests, etc. Each $\hat{d}_i$ could correspond \nto a different mode. Nonetheless, even when using a single $\hat{d}$, our results \ndemonstrate that it is possible to clone a large fraction of a sparse \ndatabase by simply performing a nonadaptive Mastermind attack.\n\n\section{Conclusion}\nWe have studied the Mastermind cloning attack, both from a\ntheoretical and experimental perspective, and have shown its\neffectiveness in copying the contents of a string database\nthrough a sublinear number of string-comparison queries. 
Furthermore, our\napproach benefits from being fully nonadaptive and surreptitious in nature \n(due to the randomized\nquery construction), which is useful in real-world settings.\n\nA natural direction for future work, of course, is on methods for\ndefeating our nonadaptive Mastermind attack, which we have not addressed in this paper.\nCertainly, having Alice randomly permute the responses from her database \nwith each query could help, since it would make it harder (but not\nnecessarily impossible) for Bob to correlate responses between\ndifferent queries. Of course, requiring Alice to always randomly permute\nher responses would take extra time, and it may also require\nadditional space if she needs to store every response query so that\nusers can refer back to her responses for other, limited types of selection\nqueries she may allow.\nSo the technique of using random permutations can reduce the\nrisks associated with the Mastermind cloning attack, but it doesn't\nnecessarily eliminate these risks, and it comes with additional costs.\n\n\\subsection*{Acknowledgements}\nA shorter version of the material in this paper (which only dealt with binary \nattribute vectors) was presented by the authors at\nthe ACM Workshop on Privacy in the Electronic Society (WPES), Chicago, October 2010.\nWe would like to thank Pierre Baldi and Padhraic Smyth for respectively \nsuggesting the privacy of genomic data and Facebook relationships as research topics.\nWe are also grateful to Athina Markopoulou and Minas Gjokas for providing the Facebook-Uniform data.\nWe would also like to acknowledge David Eppstein and Daniel Hirschberg\nfor helpful discussions regarding the group-testing topics of this paper.\nThis research was supported by Office of Naval Research under MURI grant N00014-08-1-1015 and\nby the National Science Foundation under grants 0724806, 0713046, 0847968, and an NSF Graduate Fellowship. 
\n\n\n\\bibliographystyle{plain}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\\label{intro}\n\nHigh energy processes during the first evolutionary stages of star\nformation are responsible for both radio and X-ray emission\n(\\citealt{feigelson99}). Low-mass pre-main sequence (PMS) stars are\nwell known strong X-ray emitters. Their enhanced magnetic activity\nwith respect to more evolved stars produces violent reconnection\nevents in the corona of the stars, where the plasma heated to high\ntemperatures strongly emits variable X-ray emission. Massive stars\nalso emit X-ray radiation, usually related to wind shocks.\n\nOur understanding of the X-ray emission from young stars has\ndramatically increased in the recent years due to {\\it Chandra} and\n{\\it XMM-Newton} (\\citealt{getman08}, \\citealt{arzner07}). X-ray\nobservations have revealed thousands of PMS stars in tens of stellar\nclusters, resulting in good constraints on their X-ray properties\nsuch as plasma temperatures, levels of variability, luminosities\nand X-ray flare rate (see, e.g., \\citealt{wolk05}).\n\nIn contrast, the physics associated with the radio events (nature\nand origin of the emission, variability, timescales, flaring rate)\nare still poorly constrained. \\citet{drake89} proposed that radio\nflares might be produced by the same coronal activity that is responsible for bright X-ray emission (see review by \\citealt{gudel02}).\nIt would be expected then that electrons spiraling in the magnetic\nfield of the corona produce non-thermal and highly variable\ngyrosynchrotron radiation. Moreover, ionized material in the vicinity\nof stars, in circumstellar disks or envelopes or at the base of\nbipolar outflows, also produce thermal free-free ({\\it bremsstrahlung})\nradiation.\n\nLong-term radio variability on timescales of months to\nyears has been observed in star-forming regions\n(\\citealt{felli93,zapata04a,forbrich07,choi08}). 
However, it is still not clear
whether these variations are caused by long-term mechanisms, or
whether they are instead produced by a sequence of events occurring
on shorter timescales. Systematic observations looking for short-term
variability are required to answer this question.

Recently, \citet{liu14} detected radio variability on hour timescales
in the young stellar cluster R Coronae Australis. The most powerful
radio flares\footnote{\hspace{1mm} In this work we will use the term
{\em flare} to refer to flux density variations of a factor of $>$2
on timescales from hours to days.} have so far been serendipitously
reported toward the Orion Nebula Cluster (ONC) and the background
Orion Molecular Cloud (OMC). \citet{bower03} reported a strong radio
flare at 86 GHz arising from a PMS star in the ONC, and
\citet{forbrich08} presented an even stronger radio flare at 22 GHz,
originating from a young star deeply embedded in the OMC that had
previously been detected through its X-ray emission. The low number
of observed events could indicate that radio flares are a rare
phenomenon (\citealt{andre96}), but it also prevents a proper
statistical analysis of short-term variability phenomena.

Since the typical timescales of radio variability are poorly known,
we have carried out a monitoring program comprising various cadences,
ranging from 3 hours to several months. The new capabilities of the
Karl G. Jansky VLA now allow the scheduling of multiple, short
snapshots with good sensitivity in a reasonable amount of observing
time. The only two examples of powerful radio flares are located in
Orion, and this region also harbors a rich cluster of low-mass stars
(\citealt{rivilla13a}); hence, it is an excellent target for the
detection of many sources in a single pointing.

We have carried out a multi-epoch radio continuum monitoring of the
ONC/OMC region using the Karl G. Jansky Very Large Array (VLA).
This is the first radio monitoring at high centimeter frequencies in Orion, with 3 epochs at Q-band and 11 epochs at Ka-band.\nOur data allow us to study for the first time both the short\n(hours to days) and long (months) timescale variability of the radio\nsources in Orion.\n\nThe paper is laid out as follows. In Section \\ref{observations} we\npresent the details of the observations. In Section \\ref{results}\nwe show the results of the monitoring at Ka and Q bands. In Section \\ref{full-sample}\nwe compare our results with a previous monitoring at lower frequency (8.3 GHz). We also compile the full sample of radio\/X-ray sources in the ONC\/OMC region, and then compare the radio and X-ray properties with the aim of better understanding the link between radio and X-ray emission. In Section\n\\ref{discussionE} we discuss in more detail the new radio flaring\nsource detected by our monitoring, OHC-E. In Section \\ref{source-12}\nwe analyze in particular the radio variability observed in the binary\nsystem $\\theta^{1}$ {\\it Ori A}. In Section \\ref{flaring-rate} we estimate the rate of radio\nflaring activity of young stars in the Orion Hot Core (OHC). Finally, in Section \\ref{conclusions} we summarize\nthe conclusions of our work.\n\n\n\n\n\\begin{figure*}\n\\centering\n\\includegraphics[angle=0,width=17cm]{referee-fig-19radio.eps} \n\\caption{Images of the 19 radio sources detected in our radio\nmonitoring. The name of the source is indicated in the top left\ncorner of each panel. In the lower right corners we indicate the\nepoch of the observation (we have selected the epochs with the best\nspatial resolution and good detections, and in some cases we show\nthe detections in two different epochs with different colors). The\nfirst contour level is 3$\\sigma$ and the step between successive\ncontours is 2$\\sigma$, with the exception of BN, I, 25 and 12 (which\nhave steps of 10$\\sigma$), and the 2009 March 19 epoch for OHC-E\n(steps of 5$\\sigma$). 
The open blue squares indicate the position\nof known radio sources detected by \\citet{zapata04a} at 8.3 GHz,\nand the red circles denote the position of X-ray stars detected by\nChandra (COUP Project). The background greyscale image is the ACS\/WFC\nHST image at R-band, from the Hubble Legacy Archive. In the lower\nright panel we show the synthesized beams of the different images\nused.}\n\\label{fig-2}\n\\end{figure*}\n\n\\begin{figure}\n\\hskip8mm\n\\includegraphics[angle=0,width=7cm]{fig3-up.eps}\n\\vskip2.5mm\n\\hskip15.5mm\n\\includegraphics[angle=0,width=6.6cm]{fig3-down.eps}\n\\caption{Q-band light curves for those sources that fall within the\nQ-band primary beam. In the case of non-detections, 3$\\sigma$ upper\nlimits are indicated with triangles. The flux densities have been\nnormalized by the average value ($F_{\\rm av}$) calculated with the\npositive detections (or the average value of the upper limits if\nthe source is not detected). The error bars indicate the flux density\nuncertainties. The red horizontal line indicates the value at which\nthe flux density is equal to the average. The first two observations\nwere carried out at 45.6 GHz (black filled symbols), while the last\none was carried out at 43.3 GHz (black open symbols). For source\nC, the blue open triangles indicate upper limits for extended\nemission derived from smoothed images (see Section\n\\ref{variability-months}).}\n\\label{fig-Q-variability}\n\\end{figure}\n\n\\begin{figure*}\n\\centering\n\\includegraphics[angle=0,width=18cm]{fig4-up.eps}\n\\vskip3mm\n\\hskip11.4mm\n\\includegraphics[angle=0,width=16.75cm]{fig4-down.eps}\n\\caption{Ka-band light curves of all the 19 sources detected\nthroughout the Ka-band monitoring. In the case of non-detections,\n3$\\sigma$ upper limits are indicated with triangles. 
The flux\ndensities have been normalized by the average value ($F_{\\rm av}$)\ncalculated with the positive detections (or the average value of\nthe upper limits if the source is not detected). The error bars\nindicate the flux density uncertainties. The red horizontal line\nindicates the value at which the flux density is equal to the average\nflux density. The dotted line merely joins the flux densities at\ndifferent epochs, and is not indicative of the evolution of the\nflux density between epochs. The frequency of the observations\nplotted is 33.6 GHz (black symbols), with the exception of the 2011\nJune 4 observation, which corresponds to a frequency of 30.5 GHz\n(indicated with open symbols). The second and third epoch on 2010\nNovember 23 are only separated by $\\sim$3 hours. We discuss this\nshort-term variability in detail in Section \\ref{short-variability}.\nFor source C, the blue open symbols indicate values for extended\nemission derived from smoothed images (see Section\n\\ref{variability-months}).}\n\\label{fig-Ka-variability}\n\\end{figure*}\n\n\\begin{figure}\n\\centering\n\\includegraphics[angle=0,width=8.0cm]{ratio-fluxes-BN-I.eps}\n\\caption{Ratio between the flux densities of sources I and BN versus\ntime at 33.6 GHz (filled circles) and at 30.5 GHz (open circle).}\n\\label{fig-ratio-BN-I}\n\\end{figure}\n\n\\begin{figure*}\n\\centering\n\\includegraphics[angle=0,width=18.5cm]{fig-OHC-E.eps} \\caption{{\\em\nUpper panels:} VLA observations of the OHC region at 45.6 GHz from\n2009 March 9 (left) and 2009 March 19 (right). The source I is\ndetected in both images, while the new radio flaring source OHC-E\nappeared 3.3$\\arcsec$ toward the northeast on 2009 March 19. {\\em\nLower left panel:} Zoom-in view of the detection of the source OHC-E\nfrom the 2009 March 19 observation. The flux density scale is the\nsame as in upper panels. The contours indicate emission from 3$\\sigma$\nto 19$\\sigma$ in steps of 2$\\sigma$. 
{\em Lower right panel:}
Second detection of OHC-E in the 2011 July 9 observation. The
contours indicate emission at 3$\sigma$, 4$\sigma$ and 5$\sigma$.}
\label{fig-OHC-E}
\end{figure*}

\section{Observations and data reduction}
\label{observations}

The radio observations of the ONC/OMC region were made with the VLA
in the A, B, C and D configurations at Q-band (3 epochs) and Ka-band
(11 epochs) from 2009 to 2011. Table \ref{tableobservations}
summarizes the observational details for each of the epochs: array
configuration, project ID, band name, frequency, observing bandwidth,
date, pointing center, observation length (including all calibrations),
synthesized beam, RMS noise at the center of the primary beam,
primary flux density calibrator used, and the flux density of the
complex gain calibrator.
The separations between the epochs vary: the shortest is only 3 hours,
others are several days, and still others several months. This allows
us to trace radio variability on different timescales.
The first two observations at Q-band (45.6 GHz)
were made in spectral line mode using the old VLA correlator, with
single polarization.
The third Q-band observation (45.6 GHz, AR712) used the standard
dual-polarization continuum set-up with the old correlator (100 MHz
total bandwidth). For the data taken under observing code 10B-175,
the new WIDAR correlator was used, with two 128 MHz sub-bands centered
on 33.56 GHz placed contiguously in frequency for all observations
except those of 2011 Jun 4, for which the sub-bands were separated by
7 GHz, as noted in Table \ref{tableobservations}. The observations at
Q-band were reduced using the Astronomical Image Processing System
(AIPS) package.
The observations at Ka-band were reduced using the VLA Calibration
Pipeline\footnote{https://science.nrao.edu/facilities/vla/data-processing/pipeline},
which uses the Common Astronomy Software Applications (CASA)
package\footnote{http://casa.nrao.edu}.

Phase self-calibration was performed where possible. However, the
tropospheric phase stability of the observations on 2011 Jun 4 was
poor, and the data were of insufficient signal-to-noise ratio to
enable phase self-calibration on short enough timescales to correct
for the resulting decorrelation. While the typical uncertainty in
the absolute flux density scale is 10\% at these frequencies, the
uncertainty in the absolute flux density scale for the 2011 Jun 4
data is increased to 20\%. Images were made using CASA, applying
an inner uv-cutoff ($>$50 k$\lambda$) to filter out the extended
emission from the foreground HII region ionized by the Trapezium
cluster, and natural weighting to maximize sensitivity. The images
are corrected for the response of the primary beam before being used
for photometry. The FITS files of the reduced images are available
in the electronic version of the paper.

\section{Results: radio stellar population detected at high centimeter frequencies}
\label{results}

The large fields of view (FoVs) covered by the images made here,
especially in the VLA's B and A configurations, require a rigorous
criterion for the detection of sources. Assuming Gaussian noise, and
approximating the number of potential sources as the ratio between
the area of the FoV and the solid angle of the beam
($\pi\theta_{\rm minor}\theta_{\rm major}/(4\ln 2)$), we would expect
$<$ 0.3 sources above 5 times the local RMS noise (5$\sigma$) in the
worst case.
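This back-of-the-envelope estimate is easy to reproduce numerically. A minimal sketch in Python, where the field-of-view radius and beam axes are illustrative placeholders rather than the actual parameters of our images:

```python
import math

def n_beams(fov_radius_arcsec, bmin_arcsec, bmaj_arcsec):
    """Approximate number of independent beams in a circular FoV:
    FoV area divided by the beam solid angle,
    pi * theta_minor * theta_major / (4 ln 2)."""
    fov_area = math.pi * fov_radius_arcsec ** 2
    beam_area = math.pi * bmin_arcsec * bmaj_arcsec / (4.0 * math.log(2.0))
    return fov_area / beam_area

def gaussian_tail(nsigma):
    """One-sided probability that pure Gaussian noise exceeds nsigma."""
    return 0.5 * math.erfc(nsigma / math.sqrt(2.0))

# Illustrative values: a 60 arcsec FoV radius and a 0.2" x 0.2" beam.
beams = n_beams(60.0, 0.2, 0.2)
print(beams * gaussian_tail(5.0))  # expected spurious >5 sigma peaks, ~0.07
```

Even for a few times $10^5$ independent beams the expected number of spurious $>5\sigma$ peaks stays well below one, whereas a 3$\sigma$ cut would yield hundreds of false sources.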
Therefore, we only report detections with flux densities $>$ 5$\sigma$.

We detected a total of 19 sources (Table \ref{table-our-sources}),
18 of which had been previously detected by radio monitoring campaigns
at lower frequencies (\citealt{felli93} at 5 and 15 GHz and
\citealt{zapata04a} at 8.3 GHz). Our observations have revealed the
presence of a new radio source, hereafter OHC-E, detected in the
OHC region in two different epochs.
In Fig.\ \ref{fig-our-sources}
we show the positions of all the detected sources, overplotted on
the R-band Advanced Camera for Surveys/Wide Field Channel (ACS/WFC)
Hubble Space Telescope (HST) image and on the infrared K-band image
from the 2 Micron All Sky Survey (2MASS). The sources are mainly
concentrated in the OHC region and the Trapezium Cluster, which
harbor the two highest stellar densities within our FoV
(\citealt{rivilla13a}).

To measure the flux densities of the sources, we use the AIPS task
JMFIT\footnote{JMFIT fits two-dimensional Gaussian models to the
sources by least squares.}. We also add an absolute uncertainty of
10$\%$ in quadrature (20$\%$ for the 2011 Jun 4 observations due to
the poorer phase stability of those data; see Section
\ref{observations}). Table \ref{table-fluxes} summarizes the results.
Only sources BN and I are detected in all epochs, showing nearly
constant flux densities and verifying the reproducibility of the flux
density scale. Source C is consistent with a constant flux density,
although it is not detected in all epochs. The other sources exhibit
clear flux density variations between epochs. In the case of
non-detections, we quote 3$\sigma$ upper limits.
\n\n\n\\begin{table}\n\\caption{Positions of the radio sources detected in our Ka-band and\nQ-band monitoring.}\n\\label{table-our-sources}\n\\tabcolsep 6.pt\n\\centering\n\\begin{tabular}{c c c} \n\\hline\\hline\nSource & $RA_{J2000}$ & $DEC_{J2000}$ \\\\\n& 5 35 & -5 \\\\\n\\hline\\hline\nBN & 14.11 & 22 22.69 \\\\\nI & 14.51 & 22 30.58 \\\\\nOHC-E & 14.73 & 22 29.83 \\\\\nD & 14.90 & 22 25.38 \\\\\nn$^{a}$& 14.35 & 22 32.89 \\\\\nH & 14.50 & 22 38.76 \\\\\nA & 11.80 & 21 49.29 \\\\\nC & 14.16 & 23 01.04 \\\\\nF & 18.37 & 22 37.43 \\\\\nG & 17.95 & 22 45.42 \\\\\nE & 16.96 & 22 48.78 \\\\\n15 & 16.07 & 23 07.12 \\\\\n6 & 16.75 & 23 16.44 \\\\\n7 & 16.28 & 23 16.58 \\\\\n25 & 15.77 & 23 09.86 \\\\\n12 & 15.82 & 23 14.00 \\\\\n11 & 15.84 & 23 22.40 \\\\\n16 & 16.33 & 23 22.54 \\\\\n5 & 16.85 & 23 26.31 \\\\\n\\hline\n\\end{tabular}\n\\begin{list}{}{}\n\\item[$^{\\mathrm{a}}$]{This source was called $L$ by \\citet{garay87},\nbut its more common name is n (\\citealt{menten95}).}\n\\end{list}\n\\end{table}\n\n\n\n\\subsection{Long-term variability: month timescales}\n\\label{variability-months}\n \n\nIn this section we study the behavior of the radio emission throughout the full monitoring. In Figs. \\ref{fig-Q-variability} and\n\\ref{fig-Ka-variability} we show the measured integrated flux\ndensities of all sources during the different epochs of our monitoring\nfor the Q-band and Ka-band, respectively. We note that in the\nfigures the observation on JD 2455188 (2009 Dec 22) at Q-band\ncorresponds to a slightly different frequency (43.3 GHz) from the\nfirst two epochs (45.6 GHz). 
Also, the flux density from the
observation on JD 2455717 (2011 June 4) at Ka-band (shown in Fig.\
\ref{fig-Ka-variability}) corresponds to 30.5 GHz, and not to 33.6
GHz as for the others.

With the aim of quantifying the radio variability, we study two
parameters: i) the standard deviation $\Delta F$ of the flux densities
from the average flux $F_{\rm av}$, which measures the absolute
variability; and ii) $\beta$, defined as $\beta=\Delta F/F_{\rm av}$
following \citet{felli93}, which measures the relative variability.
Given that some sources remain undetected in some epochs, and hence
have flux densities below the sensitivity limit, we consider in the
calculation of $\Delta F$ (and hence of $\beta$) the positive
detections as well as the lowest upper limit. In Table \ref{table-big}
we show the values of $F_{\rm av}$, $\Delta F$ and $\beta$ for the
detected sources.

The sources BN and I are associated with massive stars
(\citealt{reid07}, \citealt{goddi11}), and are expected to emit
mainly thermal (and constant) emission arising from ionized gas
surrounding the central object. Our monitoring at Q-band and Ka-band
(Figs. \ref{fig-Q-variability} and \ref{fig-Ka-variability}) indeed
shows that the flux density of BN is nearly constant, with a very
low month-timescale radio variability parameter at 33.6 GHz of
$\beta$=0.06.

In the case of source I the radio variability is higher, $\beta$=0.13.
\citet{zapata04a} also reported some variability towards this
source, with flux density variations of a factor $\sim$2. Furthermore,
\citet{plambeck13}, using observations separated by 15~yr, also found
evidence of a gradual flux density increase from source I with respect
to the steadier flux density of BN (see their Fig. 5). From our
Ka-band monitoring, we have studied the month-timescale evolution of
the ratio between the flux densities of sources I and BN\@.
Fig.\\ \\ref{fig-ratio-BN-I}\nshows that this ratio exhibits variations larger than the statistical\nuncertainties (we do not include the uncertainty in the absolute\nflux density scale in calculating the error in these ratios),\nsuggesting that real variability is present. \n The origin of this confirmed long-term variability toward source I could be due to ionization of infalling accretion flows onto the massive star (\\citealt{galvan-madrid11,depree14}). \n\nSource C is only detected in the epochs for which the VLA was in its most compact configurations. This suggests that the emission from source C is extended, and that the higher resolution observations have filtered it out. To derive a proper sensitivity level for extended emission we have smoothed the B-configuration images at Q-band to the resolution of the D-configuration image\n(1.9$\\arcsec\\times$1.4$\\arcsec$). For the Ka-band monitoring, we have\nsmoothed the higher resolution images (from 2011 Feb 8 to 2011 Jul\n09) to a C-configuration resolution of 0.8$\\arcsec\\times$0.8$\\arcsec$.\nWe have inspected the smoothed images, and measured the flux density\n(or 3$\\sigma$ upper limit) at the location of source C. The\nresulting light curves (Figs. \\ref{fig-Q-variability} and\n\\ref{fig-Ka-variability}) are in agreement with nearly constant emission during the monitoring.\nThis radio source is associated with one of the proplyds\\footnote{{\\it Proplyds} are objects for which circumstellar material is being ionized by the ultraviolet radiation from massive stars.} revealed by the HST (see Fig.\\ \\ref{fig-2}). Using the flux densities detected at 33.6\nGHz and 43.3 GHz, we obtain a spectral index ($F\\sim\\nu^{\\alpha}$)\nof $\\alpha\\sim$3, consistent with optically-thin dust emission from\nthe proplyd. \n\nThe other sources detected show clear variations at Ka-band between\nepochs ($\\beta>$0.29). {Many of them are detected only in some epochs, remaining below the sensitivity limit at other epochs}. 
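As an aside, the statistics used throughout this section ($F_{\rm av}$, $\Delta F$ and $\beta=\Delta F/F_{\rm av}$, computed from the positive detections plus the lowest upper limit) can be reproduced with a few lines of code. A minimal sketch, with invented flux densities (not values from Table \ref{table-fluxes}) and reading $\Delta F$ as the population standard deviation about $F_{\rm av}$:

```python
import statistics

def variability(detections_mjy, upper_limits_mjy=()):
    """Return F_av, Delta_F and beta = Delta_F / F_av, using the positive
    detections plus the lowest 3-sigma upper limit, as described in the
    text. Delta_F is taken here as the population standard deviation
    about F_av (one reasonable reading of the definition)."""
    values = list(detections_mjy)
    if upper_limits_mjy:
        values.append(min(upper_limits_mjy))
    f_av = statistics.mean(values)
    delta_f = statistics.pstdev(values)
    return f_av, delta_f, delta_f / f_av

# Invented example: four detections (mJy) plus two epochs with upper limits.
f_av, delta_f, beta = variability([9.6, 7.7, 9.9, 9.9],
                                  upper_limits_mjy=[5.1, 3.2])
print(round(beta, 2))  # -> 0.32
```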
There is no clear trend or pattern in the\nvariability, which appears to be stochastic for many of the sources. \nThis highly variable emission suggests non-thermal processes. Indeed, \\citet{menten07} detected 4 of these sources (A, F, G and 12) with Very Long Baseline Array (VLBA) observations, confirming their compactness and hence the non-thermal nature of the emission.\n\nUsing epochs separated by $\\sim$1 month, it is not possible to determine whether flux density variation is smooth during this period, or whether it happens in shorter timescales. Observations with shorter separations are needed. We address this issue in Section \\ref{short-variability}.\n\n\\begin{table*}\n\\caption{Multi-epoch radio continuum flux densities$^{a}$. The\nvalues in parenthesis correspond to 3$\\sigma$ upper limits.}\n\\label{table-fluxes}\n\\hspace*{-0.75cm}\n\\tabcolsep 3.pt\n\\centering\n\\begin{scriptsize}\n\\begin{tabular}{c| c c c| c c c c c c c c c c c}\n & \\multicolumn{13}{c}{Epoch: JD - 2454000} \\\\\nSource & 900 & 910 & 1188 & 1494 & 1523.76 & 1523.89 & 1569 & 1601 & 1649 & 1709 & \\multicolumn{2}{c}{1717} & 1724 & 1752 \\\\ \\cline{2-15}\n & \\multicolumn{3}{c|}{Q-band (GHz)} & \\multicolumn{11}{c}{Ka-band (GHz)} \\\\\n & 45.6 & 45.6 & 43.3 & 33.6 & 33.6 & 33.6 & 33.6 & 33.6 & 33.6 & 33.6 & 37.5 & 30.5 & 33.6 & 33.6 \\\\ \\hline\nBN & 31.2$\\pm$3.3 & 30.1$\\pm$3.1 & 27.9$\\pm$3.0 & 26.4$\\pm$2.7 & 25.9$\\pm$2.7 & 27.2$\\pm$2.8 & 23.9$\\pm$2.5 & 24.6$\\pm$2.5 & 25.7$\\pm$2.6 & 27.8$\\pm$2.8 & 20.5$\\pm$4.1 & 18.8$\\pm$3.8 & 25.7$\\pm$2.6 & 22.7$\\pm$2.3 \\\\\nI & 14.1$\\pm$1.7 & 12.5$\\pm$1.5 & 13.7$\\pm$1.7 & 9.2$\\pm$1.1 & 9.1$\\pm$1.2 & 12.5$\\pm$1.5 & 10.7$\\pm$1.3 & 10.8$\\pm$1.2 & 9.7$\\pm$1.1 & 9.3$\\pm$1.0 & 9.1$\\pm$1.9 & 7.3$\\pm$1.5 & 10.8$\\pm$1.2 & 8.1$\\pm$0.9 \\\\\nOHC-E & (1.4) & 7.9$\\pm$1.1 & (1.6) & (1.1) & (1.3) & (1.2) & (1.3) & (1.0) & (0.7) & (0.7) & (0.4) & (0.3) & (0.5) & 0.8$\\pm$0.3 \\\\\nD & (1.4) & (1.1) & (1.7) & (1.1) & (1.2) & (1.3) & 
4.6$\\pm$0.8 & (1.0) & (0.7) & (0.7) & (0.4) & (0.3) & (0.5) & (0.4) \\\\\nn & (1.4) & (1.1) & (1.6) & 4.0$\\pm$0.8 & (1.2) & 3.5$\\pm$0.9 & 3.5$\\pm$0.9 & (1.0) & (0.7) & (0.7) & (0.4) & (0.3) & (0.5) & (0.4) \\\\\nH & (1.4) & (1.2) & (1.7) & (1.1) & (1.2) & (1.4) & (1.2) & (1.0) & 1.4$\\pm$ 0.5 & (0.7) & (0.4) & (0.3) & 0.9$\\pm$0.3 & 0.9$\\pm$0.3 \\\\\nA & - & - & - & (5.1) & 9.6$\\pm$3.8 & (6.2) & (5.6) & 7.7$\\pm$3.6 & 9.9$\\pm$2.1 & (3.2) & - & 9.9$\\pm$2.4 & (2.2) & (2.0) \\\\\nC & (35) & (33) & 27.8$\\pm$3.7 & 12.7$\\pm$1.7 & 11.9$\\pm$1.9 &\n11.3$\\pm$1.9 & 12.0$\\pm$1.8 & 19.5$\\pm$7.2 & 20.4$\\pm$7.4 & 15.8$\\pm$4.0 & (20.8) & (16.0) & (18.0) & (17.4) \\\\\nF & - & - & - & (5.4) & 19.5$\\pm$4.1 & 57.9$\\pm$7.2 & 34.6$\\pm$5.5 & 40.1$\\pm$5.4 & 6.2$\\pm$2.6 & 8.6$\\pm$2.5 & - & 5.5$\\pm$1.5 & 20.0$\\pm$3.6 & 9.9$\\pm$2.6 \\\\\nG & - & - & - & (4.1) & (4.6) & (5.2) & (4.5) & (3.7) & (2.5) & (2.5) & (2.2) & 5.7$\\pm$1.4 & 15.5$\\pm$2.7 & (1.6) \\\\\nE & (7.1) & (5.3) & (6.7) & (2.3) & (2.5) & (2.8) & 7.1$\\pm$2.4 & 11.8$\\pm$3.1 & (1.4) & (1.4) & (1.0) & (0.5) & (1.0) & (0.9) \\\\\n15 & (8.5) & (7.1) & (8.0) & (2.6) & 5.9$\\pm$1.8 & 5.6$\\pm$2.6 & (2.8) & (2.3) & 3.0$\\pm$1.3 & (1.6) & (1.2) & (0.6) & (1.1) & (1.0) \\\\\n6 & - & - & - & 20.9$\\pm$3.9 & 24.5$\\pm$4.6 & 21.7$\\pm$4.1 & 33.2$\\pm$4.9 & 34.1$\\pm$5.6 & 7.0$\\pm$2.4 & (3.2) & - & 16.0$\\pm$ 3.6 & 37.9$\\pm$6.2 & (2.0) \\\\\n7 & - & - & - & (4.1) & 8.0$\\pm$3.1 & 23.8$\\pm$4.3 & 31.4$\\pm$4.7 & 20.7$\\pm$3.8 & 7.1$\\pm$2.2 & (2.6) & 11.9$\\pm$3.5 & 21.5$\\pm$4.7 & 14.8$\\pm$ 3.6 & (1.6) \\\\\n25 & (8.9) & (7.3) & (8.2) & (2.6) & (2.9) & (3.1) & 8.8$\\pm$2.0 & (2.3) & 6.3$\\pm$1.3 & (1.6) & (1.2) & (0.6) & 5.4$\\pm$1.2 & 19.2$\\pm$2.1 \\\\\n12 & - & - & - & (3.1) & (3.5) & 8.5$\\pm$2.9 & 11.2$\\pm$2.6 & 20.4$\\pm$2.7 & (1.9) & 38.3$\\pm$4.2 & 18.4$\\pm$4.3 & 13.4$\\pm$2.7 & 69.9 $\\pm$7.1 & 2.8$\\pm$0.9 \\\\\n11 & - & - & - & (4.8) & (5.3) & (5.9) & 22.8$\\pm$4.7 & (4.2) & 12.6$\\pm$3.7 & (2.9) & 
- & 27.9$\\pm$ 6.5 & 11.9$\\pm$2.5 & (1.8) \\\\\n16 & - & - & - & (5.9) & (6.5) & (7.2) & (6.4) & (5.2) & (3.6) & (3.6) & - & (1.1) & 4.7$\\pm$1.6 & (2.3) \\\\\n5$^{b}$ & - & - & - & - & - & - & - & - & - & - & - & 30.9$\\pm$7.5 & - & - \\\\\n\\hline\n\\end{tabular}\n\\begin{list}{}{}\n\\item[$^a$]{Integrated flux densities are presented in mJy.}\n\\item[$^b$]{Source 5 is outside the FoV for all but the 2011 Jun 4 at 30.5 GHz, whose FoV is larger.}\n\\end{list}\n\\end{scriptsize}\n\\end{table*}\n\n\n\\subsection{Short-term variability}\n\\label{short-variability}\n\n\\subsubsection{Day timescales}\n\\label{variability-days}\n\nThe first two Q-band observations are only separated by 10 days\n(2009 March 9 and 19). The main result obtained by comparing these\nobservations is the discovery of a new radio source (hereafter OHC-E). Fig.\\ \\ref{fig-OHC-E} shows the\ncomparison of the two images. The source is detected at 3.3$\\arcsec$\nnortheast of source I. The flux density variation between the two\nepochs is $>$5.6 (using the 3$\\sigma$ upper limit as the flux density\nin the 2009 March 9 image). Source OHC-E shows a constant flux\ndensity during the 4 hours of the 2009 March 19 observation,\nindicating that we might have detected a fraction of a powerful flare event.\nThis is consistent with the duration reported on other radio flares observed in Orion (source A and ORBS), which have timescales of hours to days.\n\n\\subsubsection{Hour timescales}\n\\label{variability-hours}\n\nTwo of the Ka-band observations are separated by only $\\sim$3 hours (2010 Nov 23), enabling the study of flux density variations on even shorter timescales. Fig. \\ref{fig-hours} shows the flux density variation of the 10 sources detected in these two epochs, normalized by the flux density of BN. 
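Normalizing each light curve by the simultaneous BN flux density suppresses calibration differences between the two runs. A minimal sketch of such a BN-normalized epoch-to-epoch variation factor (the function and the numbers are illustrative, not the tabulated measurements):

```python
import math

def bn_normalized_variation(f1, sig1, f2, sig2, bn1, bn_sig1, bn2, bn_sig2):
    """Epoch-to-epoch variation factor of a source, normalized by the
    (nominally constant) BN flux density measured in the same runs.
    The uncertainty combines the fractional errors of all four values."""
    ratio = (f2 / bn2) / (f1 / bn1)
    frac_err = math.sqrt((sig1 / f1) ** 2 + (sig2 / f2) ** 2 +
                         (bn_sig1 / bn1) ** 2 + (bn_sig2 / bn2) ** 2)
    return ratio, ratio * frac_err

# Illustrative numbers: a source brightening from 8 to 24 mJy while
# BN stays near 26 mJy in both runs.
r, dr = bn_normalized_variation(8.0, 1.0, 24.0, 2.5, 25.9, 2.7, 27.2, 2.8)
print(round(r, 1))  # -> 2.9
```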
In Table \ref{table-hours} we show the ratio between the flux
densities of the two epochs ($F_{\rm 1523.89}/F_{\rm 1523.76}$, where
the subscript is the JD minus 2454000), as well as the variability
parameter $\beta$ defined between these two epochs. The sources BN,
C, 15 and 6 are nearly constant within the flux density uncertainties,
with variations within a factor 0.95$-$1.17 and $\beta<$ 0.11. The
latter two sources, however, exhibit clear variations at other epochs
(see Fig. \ref{fig-Ka-variability}).

\begin{figure}
\centering
\includegraphics[angle=0,width=8.8cm]{hours-variability-ratioBN.eps}
\caption{Flux density variation (normalized by the flux density of
BN) on hour timescales for sources detected in the two runs observed
on 2010 November 23.}
\label{fig-hours}
\end{figure}

The sources n, 7, F and 12 show clear radio flares (see Fig.
\ref{fig-hours}), with flux density increases by factors $\geq$2.4
and variability parameters $\beta\geq$0.59 (Table \ref{table-hours}).
Source A, a well-known radio flaring source
(\citealt{bower03,zapata04a,gomez08}), shows a significant decrease
in its flux density, being undetectable in the second image.

There is a tentative detection of variability in source I between the
two epochs. The measured flux densities are not within the flux
density uncertainty limits (see Fig.\ \ref{fig-hours} and Table
\ref{table-fluxes}), the variability parameter is $\beta$=0.22, and
the variation factor is 1.37 (while for BN it is only 1.05). We
conclude that this short-term variation of source I is not due to
calibration uncertainties, and might be real.
As already noted in Section \ref{variability-months}, ionized gas from
infalling accretion flows onto this massive star
(\citealt{galvan-madrid11,depree14}) might be responsible for the
long-term variability observed in source I (Fig. \ref{fig-ratio-BN-I}).
However, other mechanism(s) would be needed to explain the variability
observed on hour timescales, such as stochastic shocks in the
radiative wind of the massive star (\citealt{stelzer05}), or the
presence of an unseen low-mass companion emitting non-thermal
gyrosynchrotron radiation. Indeed, \citet{goddi11} present arguments
for source I harboring a binary system based on the kinematic history
of the region. The interaction of a companion with the wind from a
massive primary could explain the variability observed here.

\begin{figure}
\centering
\includegraphics[angle=0,width=6.0cm]{spectral-index-30-37.eps}
\caption{Flux densities at 30.5 and 37.5 GHz from the 2011 June 4 observations.}
\label{fig-spectral-indeces}
\end{figure}

\subsection{Spectral indices between 30.5 and 37.5 GHz}

One of the methods commonly used to unveil the nature of the radio
emission is to study the emission as a function of frequency,
$F \propto \nu^{\alpha}$, where $\alpha$ is the spectral index.

In the 2011 June 4 observations, we observed simultaneously at two
different frequencies, 30.5 and 37.5 GHz. The calibration uncertainty
in this epoch is higher than in the other runs (see Section
\ref{observations}), which implies large uncertainties in the
derivation of $\alpha$. However, it is worth mentioning that source 7
and source G exhibit clear flux density decreases with frequency,
with $\alpha < -0.4$, suggesting that the dominant emission mechanism
is non-thermal.
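Given the two simultaneous flux densities, $\alpha$ and its uncertainty follow directly from $F \propto \nu^{\alpha}$. A minimal sketch with invented numbers (not our measured values):

```python
import math

def spectral_index(f1, sig1, f2, sig2, nu1_ghz=30.5, nu2_ghz=37.5):
    """alpha in F ~ nu**alpha from flux densities at two frequencies,
    with the uncertainty propagated from the fractional flux errors."""
    log_ratio = math.log(nu2_ghz / nu1_ghz)
    alpha = math.log(f2 / f1) / log_ratio
    sig_alpha = math.hypot(sig1 / f1, sig2 / f2) / log_ratio
    return alpha, sig_alpha

# Invented example: 20 mJy at 30.5 GHz falling to 15 mJy at 37.5 GHz,
# with 20% calibration uncertainty on each measurement.
alpha, sig = spectral_index(20.0, 4.0, 15.0, 3.0)
print(round(alpha, 1))  # -> -1.4
```

Because the two bands are only a factor $\sim$1.2 apart in frequency, even modest fractional flux errors translate into a large uncertainty on $\alpha$, as the propagated error above illustrates.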
\n \n\n\\begin{table*}\n\\caption{ List of radio sources in the ONC\/OMC region considered our analysis}\n\\label{table-big}\n\\tabcolsep 1.0pt\n\\centering\n\\vspace{-0.1cm}\n\\hspace{-0.7cm}\n\\begin{tabular}{c c c c c c c c| c c c c c c c c| c c c| c| c} \n\\hline\\hline\n \\multicolumn{8}{c|}{Radio sources} & \\multicolumn{8}{c|}{X-ray sources} & \\multicolumn{3}{c|}{Optical$^b$} & IR$^c$ & Mem.$^d$ \\\\ \\cline{1-8} \\cline{9-16}\n \\multicolumn{2}{c|}{ID} & \\multicolumn{3}{c|}{33.6 GHz} & \\multicolumn{3}{c|}{8.3 GHz} & COUP & \\multicolumn{7}{c|}{Properties} & & & & \\\\ \\cline{1-2} \\cline{3-5} \\cline{6-8} \\cline{10-16}\nSource & Z04$^a$ & $F_{\\rm av}$ & $\\Delta F$ & $\\beta$ & $F_{\\rm av}$ & $\\Delta F$ & $\\beta$ & ID & Med$E$ & $HR_{1}$& log$N_{\\rm H}$ & log$P_{\\rm KS}$ & BB & X-ray & log$L_{X}$ & OW94 & H97 & K05 & HC00 & \\\\\n & & (mJy) & (mJy) & & (mJy) & (mJy) & & & (keV) & & (cm$^{-2}$) & & Num & flare?$^e$ & (erg s$^{-1}$) & & & \\tiny({propl.)} & & \\\\\n\\hline\\hline\n I & 19 & 10.0 & 1.3 & 0.13 & 0.64 & 0.23 & 0.36 &-$^f$ & - & - & - & - & - & - & - & - & - & - & - & OMC \\\\\n C$^{g}$ & 14 & 14.8&3.7 & 0.25 & 4.97 & 1.61 & 0.32 & - & - & - & - & - & - & - & - & - & y & y & y & ONC \\\\\n 16 & 53 & 4.7&1.7 & 0.36 & 3.14 & 0.43 & 0.14 & - & - & - & - & - & - & - & - & y & - & - & - & ONC \\\\\n 5 & 61 & - & - & - & 15.88 & 0.26 & 0.02 & - & - & - & - & - & - & - & - & y & y & y & y & ONC \\\\\n\\hline\n BN & 12 & 25.5 & 1.6 & 0.06 & 3.61 & 0.25 & 0.07 & 599b$^h$& - & - & - & - & - & - & - & - & - & - & y & OMC \\\\\nOHC-E & - & 0.8 & 0.2 & 0.27 & - & -& - & 655 & 3.58 & 0.89 & 22.85$\\pm$0.01 & -4.00 & 9 & y & 31.37 & - & - & - & y & OMC \\\\\n D & 21 & 4.6&3.0 & 0.65 & 1.04 & 0.68 & 0.65 & 662 & 4.50 & 0.98 & 23.22$\\pm$0.03 & -4.00 & 13 & y & 31.29 & - & - & - & - & OMC \\\\\n n & 17 & 3.7&1.7 & 0.45 & 1.12 & 0.37 & 0.33 & 621 & 3.57 & 0.84 & 22.74$\\pm$0.01 & -4.00 & 13 & y & 30.97 & - & - & - & y & OMC \\\\\n H & 18 & 1.1&0.3 & 
0.28 & 0.66 & 0.50 & 0.75 & 639 & 3.99 & 0.96 & 23.06$\\pm$0.04 & -4.00 & 8 & y & 30.90 & - & - & - & - & OMC \\\\\n A & 6 & 9.1&3.7 & 0.40 & 16.84 & 26.7 & 1.58 & 450 & 3.32 & 0.64 & 22.34$\\pm$0.03 & -4.00 & 12 & y & 32.27 & - & - & - & y & ONC \\\\\n F & 76 & 24.6&18.1 & 0.74 & 15.04 & 5.10 & 0.34 & 965 & 1.31 &-0.63 & 21.25$\\pm$0.10 & -4.00 & 2 & n & 31.89 & -& y & - & y & ONC \\\\\n G & 73 & 15.5&9.8 & 0.63 & 2.48 & 2.09 & 0.84 & 932 & 1.29 &-0.64 & 21.17$\\pm$0.09 & -4.00 & 12 & y & 32.18 &- & y & - & y & ONC \\\\\n E & 63 & 9.5&5.5 & 0.58 & 2.07 & 0.47 & 0.23 & 844 & 1.63 &-0.50 & 22.01$\\pm$0.14 & -0.96 & 1 & y & 29.22 & - & y & y & y & ONC \\\\\n15 & 49 & 4.8&2.3 & 0.48 & 4.35 & 0.83 & 0.19 & 766 & 1.58 &-0.32 & 21.54$\\pm$0.03 & -4.00 & 20 & y & 30.71 & - & y & - & y & ONC \\\\\n 6 & 59 & 25.6&12.8 & 0.50 & 22.23 & 0.33 & 0.01 & 826 & 2.31 & 0.16 & 22.06$\\pm$0.04 & -4.00 & 11 & y & 30.35 & - & y & y & y & ONC \\\\\n 7 & 52 & 17.6&10.6 & 0.60 & 10.12 & 0.87 & 0.09 & 787 & 2.43 & 0.29 & 22.40$\\pm$0.04 & -4.00 & 5 & y & 30.22 & y & y & y & y & ONC \\\\\n25 & 38 & 9.9&6.6 & 0.67 & 4.96 & 0.61 & 0.12 & 732 & 1.33 &-0.60 & 20.99$\\pm$1.99 & -4.00 & 23 & y & 32.35 & - & y & - & y & ONC \\\\\n12 & 41 & 25.2&24.6 & 0.98 & 12.09 & 7.89 & 0.65 & 745 & 1.33 &-0.59 & 20.79$\\pm$0.08 & -4.00 & 20 & y & 32.33 & y & y & - & y & ONC \\\\\n11 & 42 & 12.6&8.5 & 0.68 & 10.82 & 0.17 & 0.02 & 746 & 2.23 & 0.19 & 22.10$\\pm$0.55 & -3.40 & 2 & n & 28.79 & y & y & y & y & ONC \\\\\n\\hline\nORBS & - & - & - & - & - & - & - & 647 & 5.20 & 1.00 & 23.51$\\pm$0.03 & -4.00 & 8 & y & 30.93 & - & - & - & - & OMC \\\\\n\\hline\n - & 1 & - & - & - & 2.84 & 0.12 & 0.04 & 229 & 4.05 & 0.98 & 22.96$\\pm$0.19 & -0.05 & 1 & n & 29.28 & - & - & - & - & EG \\\\\n - & 2 & - & - & - & 0.81 & 0.81 & 1.00 & 342 & 1.98 &-0.01 & 22.21$\\pm$ 0.01 & -4.00 & 31 & y & 31.32 & - & y & - & y & ONC \\\\\n - & 7 & - & - & - & 0.78 & 0.42 & 0.54 & 510 & 4.75 & 1.00 & 23.54$\\pm$ 0.06 & -2.59 & 3 & y & 31.26 
& - & - & - & - & OMC \\\\\n - & 9 & - & - & - & 0.48 & 0.27 & 0.56 & 530 & 5.23 & 0.98 & 23.50$\\pm$0.03 & -4.00 & 2 &(y)& 30.75 & - & - & - & - & OMC \\\\\n - & 16 & - & - & - & 0.33 & 0.18 & 0.56 & 625 & 4.81 & 0.98 & 23.42$\\pm$0.05 & -4.00 & 4 & y & 30.95 &- & - & - & - & OMC \\\\\n - & 31 & - & - & - & 0.20 & 0.08 & 0.42 & 699 & 1.51 &-0.50 & 21.61$\\pm$0.11 & -4.00 & 4 & y & 29.54 & - & y & y & y & ONC \\\\\n - & 33 & - & - &- & 2.90 & 1.79 & 0.62 & 708 & 1.54 &-0.36 & 21.46$\\pm$0.05 & -4.00 & 10 & y & 30.19 & - & y & - & y & ONC \\\\\n - & 34 & - & - & - & 5.66 & 0.59 & 0.10 & 717 & 1.55 &-0.54 & 20.98$\\pm$0.70 & -0.44 & 1 & (y) & 28.53 & y & y & y & y & ONC \\\\\n - & 37 & - & - & - & 1.44 & 0.04 & 0.03 & 733 & 2.49 & 0.19 & 22.54$\\pm$0.23 & -2.33 & 2 & n & 29.40 & y & - & y & y & ONC \\\\\n - & 43 & - & - & - & 4.05 & 0.43 & 0.11 & 747 & 3.47 & 0.33 & 22.25$\\pm$0.35 & -1.06 & 1 & y & 28.88 & y & y & y & y & ONC \\\\\n - & 44 & - & - & - & 1.33 & 0.48 & 0.36 & 757 & 1.89 &-0.17 & 22.45$\\pm$0.21 & -0.64 & 1 & (y) & 29.01 & y & - & y & y & ONC \\\\\n - & 45 & - & - & - & 6.03 & 0.58 & 0.10 & 758 & 1.66 &-0.29 & 21.73$\\pm$0.02 & -4.00 & 29 & y & 31.05 & y & y & y & y & ONC \\\\\n- & 46 & - & - & - & 1.70 & 0.84 & 0.49 & 768 & 1.63 &-0.33 & 21.52$\\pm$0.04 & -4.00 & 4 & y & 30.27 & y & y & y & y & ONC \\\\\n- & 54 & - & - & - & 1.02 & 0.61 & 0.60 & 800 & 1.93 &-0.14 & 22.36$\\pm$0.21 & -0.85 & 2 & y & 28.86 & - & y & y & y & ONC \\\\\n- & 56 & - & - & - & 0.65 & 0.45 & 0.69 & 807 & 1.37 &-0.60 & 21.43$\\pm$0.10 & -4.00 & 7 & y & 29.73 & - & y & y & y & ONC \\\\\n- & 58 & - & - & - & 1.58 & 0.16 & 0.10 & 820 & 3.64 & 0.95 & 22.86$\\pm$0.13 & -4.00 & 3 & y & 30.07 & y & - & y & y & ONC \\\\\n- & 60 & - & - & - & 3.12 & 0.41 & 0.13 & 827 & 1.63 &-0.41 & 22.24$\\pm$0.03 & -4.00 & 2 & n & 30.46 & y & y & y & y & ONC \\\\\n- & 64 & - & - &- & 7.65 & 0.45 & 0.06 & 847 & 1.35 &-0.64 & 21.62$\\pm$0.35 & -2.00 & 1 & (y) & 29.22 & y & y & y & y & ONC \\\\\n- & 65 & 
- & - & - & 3.82 & 0.18 & 0.05 & 855 & 1.96 &-0.01 & 21.69$\\pm$0.09 & -4.00 & 8 & y & 31.79 & y & y & y & y & ONC \\\\\n- & 69 & - & - & - & 2.03 & 0.38 & 0.19 & 876 & 3.22 & 0.70 & 22.69$\\pm$0.06 & -2.60 & 3 & y & 29.81 & - & y & y & y & ONC \\\\\n- & 71 & - & - & - & 4.05 & 0.39 & 0.10 & 900 & 4.04 & 0.46 & 23.54$\\pm$0.46 & -4.00 & 2 & y & 30.44 & - & y & y & y & ONC \\\\\n- & 75 & - & - & - & 0.43 & 0.14 & 0.33 & 955 & 1.70 &-0.31 & 21.46$\\pm$0.19 & -0.36 & 2 & y & 28.78 & - & y & y & y & ONC \\\\\n- & 77 & - & - &- & 2.81 & 1.71 & 0.61 & 1130 & 1.23 &-0.71 & 21.28$\\pm$0.14 & -4.00 & 7 & (y) & 31.67 & - & y & - & y & ONC \\\\\n\\hline\n - & 3 & - & - &- & 0.16 & - & - & 378 & 1.23 &-0.66 & 21.11$\\pm$0.07 & -4.00 & 5 & y & 30.20 & - & y & - & y & ONC \\\\\n - & 4 & - & - &- & 0.30 & - & - & 394 & 1.20 &-0.72 & 20.00$\\pm$1.65 & -4.00 & 6 & y & 31.32 & - & y & - & y & ONC \\\\\n - & 5 & - & - &- &0.24 & - & - & 443 & 2.42 & 0.35 & 21.76$\\pm$0.39 & -0.51 & 1 & n & 28.25 & - & y & y & y & ONC \\\\\n - & 8 & - & - &- & 0.11 & - & - & 524 & 3.31 & 0.52 & 22.37$\\pm$0.31 & -4.00 & 2 & y & 28.51 & - & y & y & y & ONC \\\\\n - & 13 & - & - &- & 0.32 & - & - & 607 & 4.37 & 0.22 & - & -0.05 & 1 & n & - & - & - & - & - & OMC$^{h}$ \\\\\n - & 20 & - & - &- & 0.29 & - & - & 658 & 2.37 & 0.23 & 22.23$\\pm$2.62 & -4.00 & 9 & y & 30.41 & y & y & y & y & ONC \\\\\n - & 23 & - & - & - & 0.32 & - & - & 671 & 1.50 &-0.52 & 21.83$\\pm$0.14 & -2.20 & 3 & y & 29.21 & y & y & - & y & ONC \\\\\n - & 28 & - & - & - & 0.34 & - & - & 690 & 3.72 & 0.82 & 22.70$\\pm$0.15 & -4.00 & 2 & y & 29.31 & y & - & y & - & ONC \\\\\n - & 29 & - & - & - & 0.24 & - & - & 689 & 1.32 &-0.61 & 20.85$\\pm$0.15 & -4.00 & 22 & y & 31.82 & - & y & - & y & ONC \\\\\n - & 51 & - & - & - & 0.30 & - & - & 780 & 3.60 & 0.92 & 22.83$\\pm$0.03 & -4.00 & 6 & y & 30.95 & - & - & - & y & ONC$^i$ \\\\\n - & 66 & - & - & - &0.32 & - & - & 856 & 1.47 &-0.47 & 21.65$\\pm$0.02 & -4.00 & 9 & y & 30.34 & y & y & y & y & 
ONC \\\\\n - & 70 & - & -& - & 0.32 & - & - & 885 & 1.44 &-0.50 & 21.61$\\pm$0.02 & -4.00 & 7 & y & 30.33 & - & y & - & y & ONC \\\\\n\\hline \\hline\n\\end{tabular}\n\\begin{list}{}{}\n\\begin{tiny}\n\\item[$^a$]{Source ID from \\citet{zapata04a}.}\n\\item[$^b$]{Optical association from the work of H97\n(\\citealt{hillenbrand97}) and K05 (\\citealt{kastner05}).}\n\\item[$^c$]{Infrared association from the work of HC00\n(\\citealt{hillenbrand00}).}\n\\item[$^d$]{Source membership: OMC=Orion Molecular Cloud, ONC=Orion Nebula Cluster, EG=extragalactic.}\n\\item[$^e$]{From visual inspection of the X-ray light curves\npublished by \\citet{getman05a}. The parentheses denote that the\ndetection is uncertain.}\n\\item[$^f$]{Although X-ray emission from wind shocks would be expected from the massive star associated with source I, the\nnon-detection by Chandra is likely due to the presence of a nearly edge-on disk (\\citealt{matthews10}) that absorbs the X-ray emission.}\n\\item[$^g$]{As discussed in Section \\ref{variability-months}, the source C is filtered out by the more extended VLA configuration observations. To compute the average flux, the variability parameter and the number of detections at 33.6 GHz, we have considered the images after smoothing to a C-configuration resolution of 0.8$\\arcsec$.}\n\\item[$^h$]{The massive BN object has an X-ray counterpart, COUP\n599b, much fainter than a low-mass companion located 0.9$\\arcsec$\nfrom BN, COUP 599a (\\citealt{grosso05}).}\n\\item[$^h$]{The radio source 13 identified by \\citet{zapata04a} does not have an optical or IR counterpart, but it has an X-ray counterpart (COUP 607) with a low number of counts, which prevents the derivation of $N_{\\rm H}$.
\\citet{rivilla13b} showed that this source is likely driving a molecular outflow in the OMC1-S region, and hence we classified it as a star embedded in the OMC.}\n\\item[$^i$]{COUP 780 does not have an optical counterpart and exhibits high extinction ($\\log N_{\\rm H}$=22.83 cm$^{-2}$). Given its\nlocation, we propose that it is a particularly embedded ONC star that belongs to the Trapezium cluster surrounding the massive star\n$\\theta^{1}$ {\\it Ori C} (\\citealt{rivilla13a}), rather than an OMC member.}\n\\end{tiny}\n\\end{list}\n\\end{table*}\n\n\\begin{table}\n\\caption{Radio variability on hour timescales between the two observations on 2010 Nov 23.}\n\\label{table-hours}\n\\tabcolsep 5pt\n\\centering\n\\begin{tabular}{c c c}\n\\hline\\hline\nSource & $F_{\\rm 523.89}\/F_{\\rm 523.76}$ & $\\beta$ \\\\\n\\hline\\hline\nBN & 1.05 & 0.03 \\\\\nI & 1.4 & 0.22 \\\\\nn & 2.9 & 0.69 \\\\\nF & 3.0 & 0.70\\\\\n7 & 3.0 & 0.70 \\\\\nA & 0.7 & 0.30 \\\\\nC & 0.95 & 0.04\\\\\n15 & 0.95 & 0.04\\\\\n6 & 1.17 & 0.11\\\\\n12 & 2.4 & 0.59\\\\\n\\hline\\hline\n\\end{tabular}\n\\end{table}\n\n\\section{Comparison with lower radio frequency and connection to X-ray emission}\n\\label{full-sample}\n\nWith the aim of better understanding the nature of the radio emission from young stars of the ONC\/OMC region, in this section we compare the properties of the 33.6 GHz emission from our monitoring with those of the 8.3 GHz emission from \\citet{zapata04a}. These authors analyzed VLA observations at 4 different epochs, with a larger FoV and a cadence of $\\sim$1 year.\nWe have cross-correlated our sample of 18 radio sources emitting at 33.6 GHz\nwith their catalog of radio sources. \n\n\nFurthermore, to explore the link between the radio and X-ray emission in a statistically significant sample, we study the full sample of sources emitting at both wavelengths in Orion.
We have cross-correlated the full sample of radio sources\nwith the catalog of X-ray stars provided by the very deep Chandra Orion Ultradeep Project (COUP, \\citealt{getman05a}). \n\n\\begin{figure}\n\\centering\n\\includegraphics[angle=90,width=9.5cm]{membership-6questions.ps}\n\\caption{Scheme of the method used to classify the sources as ONC or OMC members, or EG sources. We use cross-correlation with optical, IR and X-ray stellar catalogs, and comparison with the spatial distribution of molecular material (see text).}\n\\label{fig-scheme-membership}\n\\end{figure}\n\n\\begin{figure}\n\\centering\n\\includegraphics[angle=0,width=8.5cm]{spatial-distribution.eps}\n\\caption{Spatial distribution of the full sample of sources used in our analysis. Red and blue dots indicate members of the ONC and OMC, respectively. The ONC sources related with proplyds are denoted by large blue circles. The radio\nsource classified as EG falls outside the region shown. The greyscale shows the IR K-band image\nfrom the 2 Micron All Sky Survey (2MASS). The dashed contours trace gas from the OMC (CN N=1$-$0 emission from \\citealt{rodriguez-franco98}), from 12 K km s$^{-1}$ in steps of 4 K km s$^{-1}$. The dotted contours correspond to emission at 850 $\\mu$m from \\citet{difrancesco08}. The large circle indicates the primary beam of the 33.6 GHz observations.}\n\\label{fig-spatial-distribution}\n\\end{figure}\n\nThe ONC\/OMC region harbors different populations of radio stars: i) members of the optically visible foreground ONC illuminated by the massive stars of the Trapezium (some of them associated with proplyds); ii) stars still embedded in the background OMC at earlier evolutionary stages; and iii) extragalactic (EG) sources.
Since the emission mechanisms of each group could be different, we distinguish these different populations in our study.\n\n\n\n\\subsection{Source membership}\n\\label{membership}\n\n\nTo obtain a rough estimate of the number of expected EG sources, we follow \\citet{fomalont02}, who estimated the expected number of EG contaminants at 8 GHz. \nIn a typical FoV at 8 GHz, the expected EG contamination is $\\sim$1 source (\\citealt{zapata04a}).\nIn the smaller FoV of our 33.6 GHz monitoring, we would expect an even lower EG contamination of $\\sim$0.01 sources at 8 GHz for the most sensitive of our observations and a detection\ncriterion of 5$\\sigma$. Since these extragalactic sources generally show non-thermal emission of the type $F \\propto \\nu^{\\alpha}$ with $\\alpha < 0$ between 1$-$100 GHz (\\citealt{condon92}), the contamination at 33.6 GHz is expected to be even lower. We therefore conclude that the impact of EG contamination in our monitoring is very low. In the case of the \\citet{zapata04a} observations at 8.3 GHz, they inferred $\\sim$1 EG source.\n\nWe classify the radio sources into 4 groups: ONC members without evidence of proplyds (``naked'' ONC), ONC members associated with proplyds, embedded OMC members and EG contaminants.\nWe cross-check the radio sources with available stellar catalogs in the optical (\\citealt{odell94,hillenbrand97,kastner05}), infrared (\\citealt{hillenbrand00}) and X-rays (COUP, \\citealt{getman05a}). Additionally, we compare the position of the sources with the spatial distribution of the OMC, traced by the CN N=1$-$0 emission from \\citet{rodriguez-franco98}. A scheme explaining the method used to classify the sources is presented in Fig. \\ref{fig-scheme-membership}.
It is based on 6 steps:\n\n\\begin{figure*}\n\\centering\n\\includegraphics[angle=0,width=5.55cm]{Fav-Fav.eps}\n\\hspace{0.4cm}\n\\includegraphics[angle=0,width=5.55cm]{zz-deltaFs.eps}\n\\hspace{0.4cm}\n\\includegraphics[angle=0,width=5.55cm]{zz-betas.eps}\n\\caption{Comparison of radio properties at 33.6 and 8.3 GHz. Red and blue dots indicate members of the ONC and OMC, respectively. The ONC sources related with proplyds are denoted by large blue circles. {\\it Left panel:} Mean flux density at 33.6 GHz versus the mean flux density\nat 8.3 GHz. For those sources that fall within the FoV of our 33.6\nGHz observations but are only detected at 8.3 GHz, we have included\n3$\\sigma$ upper limits (blue solid triangles; and large empty triangles if they are related with proplyds). The dashed lines show the relation expected for different types of emission: optically-thin\nthermal dust (spectral index $\\alpha \\sim 3$); optically-thick,\nionized, stellar wind ($\\alpha=0.6$); optically-thin free-free\n($\\alpha = -0.1$); and non-thermal synchrotron ($\\alpha=-0.7$). We\nalso indicate with a solid line the case of a flat spectrum\n($\\alpha=0$). \n{\\em Middle panel:} Comparison between $\\Delta F$ at 8.3 GHz and 33.6 GHz. The symbols are the same as in the left panel.\n{\\em Right panel:} Comparison between the variability parameters $\\beta$ at 8.3 GHz\nand 33.6 GHz. The symbols are the same as in the left panel.\n}\n\\label{fig-radio-radio}\n\\end{figure*}\n\n\n\\begin{enumerate}\n\n\\item{The presence of an optical counterpart indicates that the source is likely an ONC member.
Since it is possible that some of the sources identified in this way are foreground field stars not related with the young cluster, we have additionally cross-correlated the radio sample with the list of 16 field stars from \\citet{getman05b}, without any coincidence.}\n\n\\item{The presence of an IR counterpart indicates that the source is likely an ONC or OMC star, because EG sources are expected to be weaker IR sources.}\n\n\\item{The value of hydrogen column density $N_{\\rm H}$ derived from X-rays is in general a good indicator to discriminate between ONC and OMC members. A source with log$N_{\\rm H}<$22.5 cm$^{-2}$ is considered an ONC member (\\citealt{rivilla13a}).}\n\n\\item{The presence of X-ray variability is a good indicator to determine if a highly extincted source without an IR counterpart is an embedded OMC member or an EG source, because young stars are expected to exhibit much higher X-ray variability. Following \\citet{getman05a} we consider 3 signposts of X-ray variability: i) the significance of a Kolmogorov-Smirnov test ($P_{\\rm KS}$), which establishes whether variations are above those expected from Poisson noise associated with a constant source; ii) the number of segments of the Bayesian block parametric model ($BBNum$) of source variability developed by \\citet{scargle98}; iii) visual inspection of the X-ray light curves. We consider that a source is X-ray variable when $P_{\\rm KS}<-2.0$ and\/or $BBNum \\geq 2$ and\/or it exhibits X-ray flares in the light curves. We have obtained the values of $P_{\\rm KS}$ and $BBNum$, and visually examined the X-ray light curves from \\citet{getman05a} (see Table \\ref{table-big}).}\n\n\\item{Finally, we compare the position of the sources with respect to the location of the OMC, traced by the CN N=1$-$0 emission from \\citet{rodriguez-franco98} (see contours in Fig.
\\ref{fig-spatial-distribution}).}\n\n\\end{enumerate}\n\nAdditionally, to identify the ONC stars related to proplyds, we have cross-correlated the radio sample with the catalog of proplyds from \\citet{kastner05}, using a counterpart radius of 0.5$\\arcsec$. \nThe resulting membership classification is shown in the last column of Table \\ref{table-big}.\nFig.\\ \\ref{fig-spatial-distribution} shows the spatial distribution of the different groups of sources. \n\n\n\\begin{figure*}\n\\centering\n\\includegraphics[angle=0,width=8.9cm]{ratio_deltaF_Fav_8.eps}\n\\hspace{0.5cm}\n\\includegraphics[angle=0,width=8cm]{ratio_deltaF_Fav_33.eps}\n\\caption{Radio properties at 8.3 GHz (left) and 33.6 GHz (right) of the radio\/X-ray sample. The black diamond corresponds to the EG source. The other colors and symbols are the same as in Fig. \\ref{fig-spatial-distribution}. The dashed lines indicate values of $\\beta$ of 1, 0.1 and 0.01.}\n\\label{fig-ratio-deltaF-Fav}\n\\end{figure*}\n\n\n\\subsection{Sample of sources emitting at 8.3 and 33.6 GHz}\n\\label{comparison-zapata}\n\n\nThe cross-correlation between our radio sample at 33.6 GHz and the radio sample at 8.3 GHz, with a counterpart radius of 0.5$\\arcsec$, shows that all the 33.6 GHz sources were also detected at 8.3 GHz, with the only exception of the new radio flaring source OHC-E. On the other hand, our monitoring did not detect emission from 25 sources from \\citet{zapata04a} located within our FoV.\nThe radio properties at both radio frequencies are summarized in Table \\ref{table-big}.\nIn the left panel of Fig. \\ref{fig-radio-radio} we compare the average flux densities at both frequencies. We have distinguished ``naked\" ONC stars, ONC stars related with proplyds and OMC stars. 
For those sources without 33.6 GHz detection, we have considered 3$\\sigma$ upper limits measured in our most sensitive image.\n\n\nIn general, the radio sources detected by our monitoring exhibit higher 8.3 GHz flux densities than those undetected. \nThis suggests that the non-detection at high frequency is likely due to a lack of sensitivity.\n\nWith only one exception, the radio sources have higher average flux densities at 33.6 GHz. In the case of sources BN, I and C, we interpret that this is due to a ``quiescent\" thermal component that increases with frequency. In the other sources, this does not directly imply evidence of a thermal origin, because the observations are not simultaneous and their emission is highly variable (Section \\ref{results}).\nSince this variability is likely connected to non-thermal processes, the value of $F_{\\rm av}$ is very sensitive to the presence of flares, and not representative of their thermal emission.\n\nIn the middle and right panels of Fig. \\ref{fig-radio-radio} we compare the values of $\\Delta F$ and $\\beta$ at 8.3 and 33.6 GHz. Most of the sources appear more variable at higher frequencies (15\/17 have higher $\\Delta F$; and 11\/17 have higher $\\beta$). This may indicate that the radio variability increases with frequency, although new observations of a more statistically representative sample of sources are needed to confirm this behavior.\n \nWe note that even the sources related with proplyds, which are expected to emit non-variable free-free or dust emission from circumstellar material, are clearly variable. This points toward the presence of a non-thermal component arising from the central PMS stars.
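The cross-correlations used throughout this section (radio versus radio here, and radio versus X-ray in the next subsection, both with a 0.5$\\arcsec$ counterpart radius) amount to a nearest-neighbour positional match. A minimal sketch is given below; the source names and coordinates are hypothetical, and a production analysis would typically rely on a package such as astropy.

```python
import math

def ang_sep_arcsec(ra1, dec1, ra2, dec2):
    """Angular separation in arcsec between two sky positions given in degrees
    (haversine formula, accurate at the sub-arcsecond scales used here)."""
    ra1, dec1, ra2, dec2 = map(math.radians, (ra1, dec1, ra2, dec2))
    a = (math.sin((dec2 - dec1) / 2) ** 2
         + math.cos(dec1) * math.cos(dec2) * math.sin((ra2 - ra1) / 2) ** 2)
    return math.degrees(2 * math.asin(math.sqrt(a))) * 3600.0

def cross_match(sample, catalog, radius_arcsec=0.5):
    """Match each source in `sample` to its nearest counterpart in `catalog`
    within radius_arcsec. Both inputs map id -> (RA, Dec) in degrees."""
    matches = {}
    for sid, (ra, dec) in sample.items():
        best, best_sep = None, radius_arcsec
        for cid, (cra, cdec) in catalog.items():
            sep = ang_sep_arcsec(ra, dec, cra, cdec)
            if sep <= best_sep:
                best, best_sep = cid, sep
        if best is not None:
            matches[sid] = best
    return matches

# hypothetical positions: one counterpart 0.36 arcsec away, one far away
radio = {"src1": (83.8186, -5.3897)}
xray = {"coup_a": (83.8186, -5.3898), "coup_b": (83.9000, -5.4000)}
print(cross_match(radio, xray))  # {'src1': 'coup_a'}
```

The brute-force double loop is adequate for catalogs of tens to hundreds of sources such as those matched here.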
\n\n\\subsection{Full sample of sources emitting radio and X-rays}\n\\label{comparison-COUP}\n\nWe have cross-correlated the sample of radio sources (detected at 8.3 GHz and\/or at 33.6 GHz\\footnote{We have also included the flaring radio source ORBS detected by \\citet{forbrich08} at 22 GHz.}) with the COUP catalog, searching for X-ray counterparts within 0.5$\\arcsec$ of the radio sources. We have obtained a final sample of 51 sources emitting at radio and X-rays. The properties of the X-ray counterparts are shown in Table \\ref{table-big}.\n\n\\subsubsection{Radio properties}\n\nIn Fig. \\ref{fig-ratio-deltaF-Fav} we show the radio properties of the subsamples of X-ray sources with emission at 8.3 and 33.6 GHz. \nThe ONC sources related with proplyds exhibit low absolute variability at 8.3 GHz ($\\Delta F \\lesssim $ 1 mJy) compared to ``naked\" ONC stars, which appear more variable ($\\Delta F \\gtrsim$ 1 mJy). This suggests that the 8.3 GHz emission in the proplyds is dominated by nearly constant thermal emission from circumstellar material, while ``naked\" ONC stars might be dominated by variable non-thermal processes related with the magnetic activity of the PMS star. This could be gyrosynchrotron emission produced by the acceleration of electrons in magnetic field reconnection events in the corona of the star (\\citealt{andre96}).\n\nHowever, we note that most of the proplyds with lower $F_{\\rm av}$ have values of $\\beta>$0.1 (left panel of Fig. \\ref{fig-ratio-deltaF-Fav}), because their $\\Delta F$ is a significant fraction of their $F_{\\rm av}$. Therefore, it seems that the 8.3 GHz emission from proplyds, although dominated by thermal emission, could also include a non-thermal component arising from the central PMS stars. \n\nThe OMC members show low values of absolute variability at 8.3 GHz ($\\Delta F < $ 1 mJy).
However, since most of them are weak sources, this low absolute variation translates into significant relative variability, with values of $\\beta>$0.1, also pointing to a non-thermal origin. \nOnly BN is nearly constant, with low values of both $\\Delta F$ and $\\beta$, as occurs at 33.6 GHz. This confirms that the radio emission is likely dominated by thermal emission from the ionized gas in the massive envelope around the star.\n\nThe right panel of Fig. \\ref{fig-ratio-deltaF-Fav} shows that all sources appear in general more variable at 33.6 GHz (both in $\\Delta F$ and $\\beta$). This may indicate that the emission at higher frequencies is more dominated by highly variable non-thermal processes.\n\nTherefore, we conclude that the radio emission can be attributed to two different mechanisms: i) highly-variable (flaring) non-thermal radio gyrosynchrotron emission produced by accelerated electrons in the magnetospheres of low-mass PMS stellar members of both the ONC and OMC; and ii) non-variable thermal emission from the ionized gas and heated dust of the ONC proplyds illuminated by the Trapezium Cluster, or from the ionized gas in the envelopes surrounding massive stars, as in the case of the BN object.\n\n\\begin{figure*}\n\\centering\n\\includegraphics[angle=0,width=8.8cm]{NH-MedE.eps}\n\\hspace{0.5cm}\n\\includegraphics[angle=0,width=8cm]{NH-HR1.eps}\n\\caption{Relation between the hydrogen column density $N_{\\rm H}$ and the median energy of X-ray photons ($MedE$, left panel) and the hardness ratio ($HR_{\\rm 1}$, right panel) of the members of the radio\/X-ray sample. The dashed line in the left panel is the empirical fit found by \\citet{feigelson05}. The colors and symbols are the same as in Fig. \\ref{fig-ratio-deltaF-Fav}.}\n\\label{fig-MedE-HR}\n\\end{figure*}\n\n\n\\subsubsection{X-ray properties}\n\n\\vspace{0.2cm}\n\n{$\\bullet$ \\em Hydrogen column density $N_{\\rm H}$, Hardness ratio ($HR_{\\rm 1}$) and Median energy ($MedE$):} in Fig.
\\ref{fig-MedE-HR} the values of median energy ($MedE$) of X-ray photons and the hardness ratio\\footnote{Hardness ratio $HR_{1} = (h-s)\/(h+s)$, where $h$ and $s$ refer to the counts detected in the hard (2.0$-$8.0 keV) and soft (0.5$-$2.0 keV) bands, respectively. Values closer to 1.0 indicate a hard X-ray source, and $-$1.0 a soft X-ray source.} ($HR_{\\rm 1}$) of the full radio\/X-ray sample as a function of $N_{\\rm H}$ are shown. It is clear that the stars embedded in the OMC have higher values ($MedE>$3.5 keV and $HR_{1}>$0.84) than ONC stars. \\citet{feigelson05} found a relation between the median energy of the X-ray photons arising from the stars and the hydrogen column density $N_{\\rm H}$ (left panel of Fig.\\ \\ref{fig-MedE-HR}). This trend is due to an absorption effect: in the embedded sources, only the harder photons are able to escape through the molecular gas and then be detectable, while softer ones (with lower energy) are absorbed. As a consequence, the embedded stars appear as harder sources. \nAmong the ONC members, those related with proplyds exhibit higher values of $N_{\\rm H}$ (and hence of $MedE$ and $HR_{\\rm 1}$). This is likely due to the presence of circumstellar material that produces higher extinction than in ``naked\" ONC stars.\n\n\n\\begin{figure*}\n\\centering\n\\includegraphics[angle=0,width=8cm]{Fav-Lx-8.eps}\n\\hspace{0.5cm}\n\\includegraphics[angle=0,width=7.0cm]{Fav-Lx-33.eps}\n\n\\vspace{0.5cm}\n\\includegraphics[angle=0,width=8.0cm]{zzz-deltaF-Lx-8.eps}\n\\hspace{0.5cm}\n\\includegraphics[angle=0,width=7cm]{zzz-deltaF-Lx-33.eps}\n\n\n\\vspace{0.5cm}\n\\hspace{0.1cm}\n\\includegraphics[angle=0,width=7.75cm]{zzz-beta-Lx-8.eps}\n\\hspace{0.5cm}\n\\includegraphics[angle=0,width=7cm]{zzz-beta-Lx-33.eps}\n\n\\caption{Radio properties ($F_{\\rm av}$, $\\Delta F$, $\\beta$) at 8.3 and 33.6 GHz versus the value of the X-ray luminosity derived from the X-ray COUP counterparts. The colors and symbols are the same as in Fig. 
\\ref{fig-ratio-deltaF-Fav}.}\n\\label{fig-radio-Lx}\n\\end{figure*}\n\n\n\\vspace{0.5cm}\n\n \n{$\\bullet$ \\em X-ray luminosities}: In Fig. \\ref{fig-radio-Lx} we show the radio properties of the radio\/X-ray sample as a function of the extinction-corrected X-ray luminosity $L_{\\rm X}$. We find that $L_{\\rm X}^{ONC-proplyds} < L_{\\rm X}^{OMC} < L_{\\rm X}^{ONC-naked}$. Although this result could be physically real, we note that it should be taken with caution, because the derivation of the extinction-corrected luminosity is more uncertain for highly extincted sources. \n\nWe also investigate whether there is a relation between the radio properties and the X-ray luminosity in the radio\/X-ray sample. \\citet{forbrich13} remarked that there is no clear correlation between the radio flux densities measured by \\citet{zapata04a} at 8.3 GHz and the COUP X-ray luminosities (upper left panel in Fig. \\ref{fig-radio-Lx}). Similarly, we do not see a trend in our subsample at 33.6 GHz (upper right panel in Fig. \\ref{fig-radio-Lx}). However, if we do not consider the proplyds, a tentative trend appears: the radio flux density generally increases with the X-ray luminosity. This suggests a relation between the X-ray and radio emission for the ``naked\" ONC and OMC sources. In the case of proplyds, a significant fraction of the radio emission is expected to arise from gas ionized by external illumination, which is not linked to X-ray activity. Thus, a relation between the radio flux and X-ray luminosity is not expected in proplyds, as observed in Fig. \\ref{fig-radio-Lx}.\n\nWe have also studied the fraction of X-ray sources detected at radio wavelengths as a function of the X-ray luminosity (Fig. \\ref{fig-ratio-radio-X}) for the 8.3 and 33.6 GHz monitorings. In each case, we have considered the COUP sources that fall within the FoV of each observation with enough counts so that the X-ray luminosity corrected for extinction could be reported.
We find 159 X-ray sources within our FoV and 595 sources within the 8.3 GHz FoV. Both samples show a very similar behavior. The weaker X-ray sources with log$L_{\\rm X}< 28.5$ erg s$^{-1}$ (200 sources, i.e., 13$\\%$ of the full COUP sample) were not detected by our radio monitoring, while \\citet{zapata04a} only detected 1 source. Fig. \\ref{fig-ratio-radio-X} shows that the fraction of X-ray sources detected in the radio increases with X-ray luminosity. This indicates that the radio observations have statistically detected the most luminous X-ray sources. This suggests that: i) the underlying mechanisms responsible for the X-ray and (at least some fraction of) the radio emission are somehow related; and ii) radio monitorings have been limited in the past due to sensitivity, confirming the conclusion of \\citet{forbrich13}. \n\nThe middle and lower panels of Fig. \\ref{fig-radio-Lx} also show tentative trends between the radio variability ($\\Delta F$ and $\\beta$) and the X-ray luminosity for the sources not related with proplyds. In general, the radio variability increases towards higher $L_{\\rm X}$.\n\nHowever, we note that the number of sources of the ``naked\" ONC and OMC subsamples is too low to draw robust general conclusions. Deeper radio observations detecting a much larger number of sources are needed to confirm these tentative trends.\n\n\\begin{figure}\n\\centering\n\\includegraphics[angle=0,width=8cm]{ratio-radio-X-zap04-our.eps}\n\\caption{Fraction of X-ray COUP sources detected by our radio\nmonitoring at 33.6 GHz (solid line) and \\citet{zapata04a} monitoring\nat 8.3 GHz (dashed line) as a function of the absorption-corrected\ntotal X-ray luminosity $L_{\\rm X}$. 
In each case, we have considered\nthe COUP sources that fall within the FoV and with enough counts\nso that the X-ray luminosity corrected for extinction could be\nreported: 159 for our monitoring and 595 for the 8.3 GHz monitoring\n(see also \\citealt{forbrich13}).}\n\\label{fig-ratio-radio-X}\n\\end{figure}\n\\begin{figure*}\n\\centering\n\\includegraphics[angle=0,width=8cm]{zzz-Fav-BBNum-8.eps}\n\\hspace{0.5cm}\n\\includegraphics[angle=0,width=7cm]{zzz-Fav-BBNum-33.eps}\n\\vspace{0.5cm}\n\n\\includegraphics[angle=0,width=8cm]{zzz-deltaF-BBNum-8.eps}\n\\hspace{0.5cm}\n\\includegraphics[angle=0,width=7cm]{zzz-deltaF-BBNum-33.eps}\n\\vspace{0.5cm}\n\n\\includegraphics[angle=0,width=7.75cm]{zzz-betas-BBNum-8.eps}\n\\hspace{0.5cm}\n\\includegraphics[angle=0,width=7cm]{zzz-betas-BBNum-33.eps}\n\n\n\\caption{Radio properties ($F_{\\rm av}$, $\\Delta F$, $\\beta$) at 8.3 and 33.6 GHz versus the value of the $BBNum$ derived from the X-ray COUP counterparts, which is a proxy for the X-ray variability. The colors and symbols are the same as in Fig. \\ref{fig-ratio-deltaF-Fav}.}\n\\label{fig-radio-BBNUM}\n\\end{figure*}\n\n\\vspace{0.5cm}\n\n{$\\bullet$ \\em X-ray variability}: We now study whether there is a relation between the radio and the X-ray variability. We follow the 3 conditions presented in Sect. \\ref{membership} to consider a source X-ray variable: $P_{\\rm KS}<-2.0$ and\/or $BBNum \\geq 2$ and\/or presence of X-ray flares in the light curves (see Table \\ref{table-big}). Only one radio source (E) of the 33.6 GHz\/X-ray subsample does not exhibit X-ray variability. Namely, 93$\\%$ of the stars of this subsample are X-ray variable. Regarding the full radio\/X-ray sample, 85$\\%$ of the stars are X-ray variable. These fractions are higher than the fraction of X-ray variable sources of the full COUP sample, which is $\\sim$60$\\%$ (\\citealt{getman05a}). Therefore, we have found that the radio sources are mostly associated with X-ray variable stars. 
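The three-fold variability criterion applied above can be written compactly. The thresholds below follow the text; the example sources and their $P_{\\rm KS}$ and $BBNum$ values are hypothetical, not entries of the COUP catalog.

```python
def is_xray_variable(log_p_ks, bb_num, has_flares):
    """X-ray variability criterion from the text: a source is variable if the
    KS significance log P_KS < -2.0, and/or BBNum >= 2, and/or its light
    curve shows flares on visual inspection."""
    return log_p_ks < -2.0 or bb_num >= 2 or has_flares

# hypothetical sample: (log P_KS, BBNum, flares seen on visual inspection)
sample = {
    "src_a": (-4.00, 7, True),   # variable by all three signposts
    "src_b": (-0.44, 1, True),   # variable only through visible flares
    "src_c": (-0.05, 1, False),  # consistent with a constant source
}
n_var = sum(is_xray_variable(*row) for row in sample.values())
print(f"{n_var}/{len(sample)} variable")  # 2/3 variable
```

Because the three signposts are combined with a logical OR, a source flagged by any single indicator counts as variable, which is the permissive sense in which the fractions above are quoted.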
This supports the idea that at least some of the radio emission might be related to the same magnetic events in the coronae of PMS low-mass stars that also produce the X-ray emission.\n\nIn Fig. \\ref{fig-radio-BBNUM} we show the radio properties of the radio\/X-ray sample as a function of the level of X-ray variability, quantified by $BBNum$. As in Fig. \\ref{fig-radio-Lx}, tentative trends appear if we do not consider the proplyds. However, given that the X-ray and radio observations were carried out at different epochs and with different sensitivities, no firm conclusions can be drawn. Unlike the very deep COUP observation, which continuously observed the region during $\\sim$10 days, the radio monitoring runs are much shorter, yielding a lower probability of detecting large flux density variations such as flares. To better understand whether radio and X-ray variability are directly related, simultaneous observations will be needed.\n\n\\section{The new radio source embedded in the Orion Hot Core: OHC-E}\n\\label{discussionE}\n\nOur observations have twice detected a new radio source, OHC-E (2009\nMarch 19 and 2011 July 09, see Fig.\\ \\ref{fig-OHC-E}). Neither\n\\citet{felli93} nor \\citet{zapata04a}, who performed observations\ncovering 7 months and 4 years at 5 and 15 GHz and 8.3 GHz, respectively,\nnor \\citet{goddi11}, who observed the same region at 45 GHz two\nmonths before our detection (2009 January 12), detected emission\ntowards this source.\n\nThe source is not resolved by the beam of our observations in either\nof our two detections ($\\sim 0.2\\arcsec$ and $\\sim 0.075\\arcsec$).\nWe can set upper limits to the deconvolved size of the emission of\n$<0.1\\arcsec$ ($<40$ AU) for the\nQ-band emission and of $<0.04\\arcsec$ ($< 10$ AU) for the Ka-band emission. Unfortunately, it is not\npossible to determine the spectral index of the emission, because\nobservations at several radio wavelengths were not carried out\nsimultaneously.
We derived lower limits to the brightness temperature\nof $\\sim 460$ K and $\\sim 615$ K from the source flux density at\n45.6 GHz and 33.6 GHz, respectively, and the upper limits to the\nsource sizes. Although these temperatures are consistent with both\nthermal and non-thermal emission, our monitoring shows that the\nemission is highly variable, suggesting a non-thermal origin.\n\nThe position of the radio flare coincides with an embedded X-ray\nlow-mass pre-main sequence (PMS) star, COUP 655 (Fig. \\ref{fig4}).\nThe source OHC-E is also located very close to the southeast member of a binary stellar\nsystem (CB4, Fig. \\ref{fig4}) observed with high angular\nresolution by the Near Infrared Camera and Multi-Object Spectrometer (NICMOS) onboard the Hubble Space Telescope (HST) (see also \\citealt{stolovy98}; \\citealt{simpson06}).\nTherefore it seems that both the radio and the X-ray emission are related to this star. We can give a rough estimate for the mass of this star using\nthe canonical relation for X-ray young stars, log[$L_{\\rm X}$\/$L_{\\rm\nbol}$]=$-$3.0 (\\citealt{pallavicini81}). Assuming a stellar age typical for a massive star forming region of 5$\\times$10$^{5}$ yr, OHC-E would have a mass of $\\sim$1 M$_{\\odot}$ (using the \\citealt{siess00} stellar models), supporting that it is a low-mass star. Hence, the variability observed at both radio and X-ray wavelengths can be related to magnetic activity in the corona of this PMS low-mass star.\n\n\\begin{figure}\n\\centering\n\\includegraphics[angle=0,width=8.0cm]{OHC-E-zoom.eps}\n\\caption{2.15 $\\mu$m HST-NICMOS image (color scale) of the region where OHC-E was detected, from the Hubble Legacy Archive (HLA). An IR binary is detected toward this region, CB4 (see also \\citealt{stolovy98,simpson06}). The green contours correspond to the radio emission at 45.6 GHz detected on our 2009 March 19 image (5$\\sigma$, 10$\\sigma$ and 15$\\sigma$).
The position of the X-ray star COUP 655 is indicated with the white plus sign.}\n\\label{fig4}\n\\end{figure}\n\n\\section{Source 12: the binary $\\theta^{1}$ {\\it Ori A}}\n\\label{source-12}\n\n$\\theta^{1}$ {\\it Ori A} is a well-known binary system, with a B0.5\nprimary and a low-mass companion (e.g., \\citealt{close13}).\nAs observed in the binary WR140 (\\citealt{williams90}), the binarity\ncan produce a smooth periodic variation of the radio emission caused\nby the combination of two geometrical effects: i) variation of the\nfree-free opacity due to the ionized envelope of the primary star\nas the companion orbits; ii) variation of the stellar activity (and\nhence the non-thermal emission) inversely proportional to the\nseparation between the components.\n\n\\citet{felli91,felli93} monitored the radio emission from this\nbinary at 5 and 15 GHz.\nWe plot in Fig.\\ \\ref{fig-source12} their flux\ndensities and those of our monitoring as a function of the orbital phase $\\phi$. We\nhave used the orbital parameters $P$=65.4325 days and\n$T_{\\rm 0}$=JD 2446811.95 (\\citealt{bossi89}). The flux density at 5 and 15 GHz peaks near periastron ($\\phi\\sim$0.15), \nwith lower levels at $\\phi\\sim$0.1 and $\\phi=$0.6 to\n0.9. Our higher frequency flux densities are\nconsistent with those at lower frequencies, with a peak after\nperiastron at $\\phi\\sim$0.2. As \\citet{felli93} noted and our data\nat higher frequency confirm, the orbital modulation model may explain\nwhy the main peak is always detected at the same phase, $\\phi\\sim$0.15$-$0.2.\nHowever, the presence of large scatter in the radio emission cannot\nbe explained with this model. \\citet{felli93} set an upper limit\nfor the variability timescale of 10 to 20 days. Our monitoring has\nrevealed variability on shorter timescales of hours (Section\n\\ref{short-variability}). This is in agreement with non-thermal\nemission due to stellar activity, perhaps in addition to orbital\nmodulation.
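The phase folding used here is a direct application of the ephemeris of \\citet{bossi89}; a minimal sketch follows, where the observing epoch is illustrative rather than one of our actual epochs.

```python
P_DAYS = 65.4325      # orbital period of theta^1 Ori A (Bossi et al. 1989)
T0_JD = 2446811.95    # reference epoch T_0, Julian Date

def orbital_phase(jd, period=P_DAYS, t0=T0_JD):
    """Fold a Julian Date with the ephemeris: phi = frac((JD - T0) / P)."""
    return ((jd - t0) / period) % 1.0

# an illustrative epoch 10.5 orbits after T_0 folds to phase 0.5
print(round(orbital_phase(T0_JD + 10.5 * P_DAYS), 3))  # 0.5
```

Folding all epochs this way puts the heterogeneous 5, 15 and 33.6 GHz measurements on the common phase axis used in the figure.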
Therefore, we conclude that although some orbital\nmodulation may be present, producing the observed flux density peak,\nthere is also a non-thermal emission component, likely arising from\nthe low-mass companion, that varies independently of the orbital\nphase.\n\n\begin{figure}\n\centering\n\includegraphics[angle=0,width=8cm]{fig-source12.eps}\n\caption{Flux densities of source 12 as a function of the orbital\nphase at 33.6 GHz (red dots, this paper), 5 and 15 GHz (blue squares\nand green triangles, respectively, from \citealt{felli91,felli93}).}\n\label{fig-source12}\n\end{figure}\n\n\section{Rate of radio flaring activity}\n\label{flaring-rate}\n\n\nOnly a few radio flares from young stellar objects have been detected to date. This could indicate that these phenomena are rare events, or alternatively, that the sensitivity of the observations carried out so far has limited their detection. Our monitoring allows us to evaluate which is the most likely scenario. We have detected flares in sources OHC-E, F, n, 7 and 12. Other sources (e.g. D, E, G, 6, 11, 12 and 25) showing high levels of variability between epochs separated by longer timescales might also be flaring sources, but with our data we are not able to confirm short-term variability.\n\n\citet{bower03} parametrized the rate of radio flares (flares day$^{-1}$) in the entire ONC\/OMC region based on their data and the previous observations of variable sources by \citet{felli93}. 
Since the number of flares depends on the field of view (FoV) of a particular observation, and on the stellar density of the observed region, we reformulate the expression for the rate of radio flares, $N_{\rm RF}$, as:\n\n\begin{equation}\n\label{flarerate}\nN_{\rm RF}=\gamma \left(\frac{F_{\rm 3\sigma}}{100\,{\rm mJy}}\right)^{\alpha} \left(\frac{A_{\rm FoV}}{A_{\rm tot}}\right) \left(\frac{\Sigma_{\rm FoV}}{\overline{\Sigma}}\right),\n\end{equation}\n\n\noindent where $F_{\rm 3\sigma}$ is the threshold detection limit\n(considered as 3 times the RMS of the observation); $\alpha$ is\nthe spectral index of the emission; $A_{\rm FoV}$ and $A_{\rm tot}$\nare the areas of the observed region and the full ONC\/OMC region,\nrespectively; and $\Sigma_{\rm FoV}$ and $\overline{\Sigma}$ are\nthe surface stellar density in the observed region and the mean\nstellar density of the entire ONC\/OMC, respectively. The parameter\n$\gamma$ is a constant that \citet{bower03} estimated to be between\n0.01 and 0.1.\n\nSince the flare rate is a function of the stellar density, it is\nconvenient to observe the most crowded region of a cluster to enhance\nthe chances of detecting radio flares. The region with the highest\nstellar density in Orion is the OHC (\citealt{rivilla13a}). Considering\na $\sim$25$\arcsec\times$25$\arcsec$ region centered on the stellar\ndensity peak, using the typical power-law index of non-thermal\nemission of $\alpha=-1$, the mean RMS noise of our monitoring (0.3\nmJy), the approximate size of the ONC\/OMC cluster of 15 pc$\times$15\npc, and the census of stars in the region provided by COUP, we\nobtain $N_{\rm RF}\sim$0.017$-$0.17 flares day$^{-1}$.\n\nWe have detected two clear flares (OHC-E and source n\footnote{The\ndetections of sources H and D in the OHC region also show tentative\nevidence of flaring emission toward these stars (see Section\n\ref{comparison-zapata}).}) 
in this area in 14 observations, so a\nrough estimate of the flaring rate is $\sim$0.14 flare observation$^{-1}$.\nAssuming that the typical duration of the radio flares is several\nhours to days (\citealt{andre96,bower03,forbrich08}), this would\nbe approximately equivalent to 0.14 flare day$^{-1}$, which is\nsimilar to the upper value from Eq.\ \ref{flarerate}. Therefore,\nour results suggest that $\gamma$ is closer to 0.1 than to\n0.01, and consequently that the number of detectable flares is\nsignificant. Hence, our multi-epoch monitoring confirms that\nthe presence of radio flares is not a rare phenomenon in crowded\nyoung stellar clusters.\n\nObviously, the detection of radio flares in the OHC is favored by\nits high density of embedded low-mass stars. But, according to Eq.\\n\ref{flarerate}, even in less dense regions the number of detectable\nflares would be significant, especially if the sensitivity is\nenhanced. The improved capabilities of the VLA and ALMA may be\nexpected to reveal many more radio flares arising from young low-mass\nstar clusters. A single-polarization ALMA observation at 90 GHz\n(band 3), with a full bandwidth of 7.5 GHz and 50 antennas, can reach\nan 8 $\mu$Jy sensitivity limit in only 2.3 hours of on-source observing\ntime\footnote{According to the ALMA sensitivity calculator available in\nthe ALMA Observing Tool.}.\nConsidering a FoV of $\sim$25$\arcsec\times$25$\arcsec$, this ALMA\nobservation may find $\sim$ 6 radio flares day$^{-1}$, which\nrepresents $\sim 25$\% of the X-ray sources detected by Chandra in\nthe region. This would confirm that radio flares are common events,\nsimilar to the X-ray flares detected by Chandra.\n\nFuture observations are clearly needed to derive a better estimate\nof this radio flaring rate, through the detection of many more radio\nflares. 
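For reference, the flare-rate expression above is straightforward to evaluate numerically. The helper below is our own sketch of that equation; the area and density ratios are left as explicit inputs because their values depend on the adopted cluster size and on the stellar census.

```python
def flare_rate(gamma, f_3sigma_mjy, alpha, area_ratio, density_ratio):
    """Expected number of radio flares per day in the observed field,
    following the parametrization
    N_RF = gamma * (F_3sigma / 100 mJy)^alpha
                 * (A_FoV / A_tot) * (Sigma_FoV / mean Sigma)."""
    return gamma * (f_3sigma_mjy / 100.0) ** alpha * area_ratio * density_ratio
```

With $\alpha=-1$, the predicted rate scales inversely with the detection threshold, so halving the RMS noise doubles the number of detectable flares, which is why deeper VLA and ALMA observations should uncover many more events.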
The flaring information for tens or even hundreds of PMS\nstars will provide a complete statistical description of radio\nshort-term variability. Beyond the purely scientific interest, this\nwould have important technical implications for interferometric\nimaging (\citealt{bower03}). The classical interferometric imaging\ntechniques assume a constant sky in image reconstruction. However,\nthis assumption would be violated by the presence of many variable\nsources in the field. This would lead to a reduced dynamic range\nof the image (\citealt{stewart11}). Also, it would become difficult\nto concatenate multiple observations of the same region to obtain\ndeeper images. Therefore, a more accurate determination of the radio\nflare rate would help to understand to what extent this may affect\ndeeper VLA and ALMA observations of young stellar clusters.\n\n\section{Summary and Conclusions}\n\label{conclusions}\n\nIn this work we have presented a multi-epoch radio monitoring of\nthe ONC\/OMC region carried out with the VLA at high centimeter frequencies (33\nand 45 GHz). We have detected 19 radio sources, mainly concentrated\nin the Orion Hot Core and Trapezium regions. Two of them are related\nto massive stars: sources BN and I. The flux densities of the BN source and\nsource C (related to a proplyd) are compatible with constant thermal emission. Source I, besides a constant thermal component, shows\ntentative evidence of radio variability at both short- and long-term\ntimescales. The remaining 16 sources show long-term (month-timescale)\nvariability, but it is not yet clear whether these comprise multiple\nshort-term events not covered by our monitoring\ncadence. Indeed, we have confirmed radio flares (i.e., short-term\nradio variability on timescales of hours to days) in 5 sources: F,\n7, n, 12 and the new source OHC-E, previously undetected at radio\nwavelengths. 
\n\n\nWe have complemented our radio sample with other radio detections at 8.3 GHz\nfrom the literature, and cross-correlated it with the X-ray COUP catalog to obtain the full sample of sources emitting in both radio and X-rays in the ONC\/OMC region. The radio emission from young stars can be explained by a combination of two different mechanisms: i) non-variable thermal emission produced by ionized gas and\/or heated dust from the ONC proplyds and the massive objects BN and I; and ii) variable (flaring) non-thermal gyrosynchrotron emission produced by accelerated electrons in the stellar corona of PMS low-mass members of the ONC and OMC.\nWe have found several hints relating this variable radio emission to the X-ray activity.\n\nOur study of the radio variability of $\theta^{1} {\it Ori A}$ concludes that there is evidence of a non-thermal emission component likely arising from the low-mass companion. Moreover, some orbital modulation may be present in this binary, producing the observed flux density peak.\n\nWe have derived a rough estimate of the radio flaring rate in the densest cluster in the region, which is embedded in the Orion Hot Core. We have obtained\n$\sim$0.14 flares day$^{-1}$. This value is consistent with an empirical estimate assuming the sensitivity and FoV of our observations and the stellar density of the region. This confirms that radio flares are not rare phenomena during the earliest stages of star formation, as previously thought, but relatively common events, similar to the well-known X-ray flares.\n\nOur results have shown that radio monitoring to date has been strongly\nlimited by sensitivity, detecting mainly those sources with higher\nX-ray luminosity. \nThis implies that the new capabilities of the VLA\nand ALMA offer a unique opportunity to detect a much larger population\nof radio sources in young stellar clusters. 
New observations with\nimproved sensitivity and better angular resolution will provide\ncrucial information about the origin and nature of the radio emission,\nand they will reveal how radio and X-ray phenomena are connected.\n\nFurthermore, the presence of multiple variable radio sources\nwould have important implications for interferometric\nimaging, since the classical techniques assume a constant sky. \nA more accurate determination of the radio flare rate would help to understand how this variability can affect the upcoming VLA and ALMA observations of young stellar clusters.\n\n\n\section*{Acknowledgments}\n\begin{small}\nThis work has been partially funded by Spanish projects\nAYA2010-21697-C05-01 and FIS2012-39162-C06-01, and Astro-Madrid\n(CAM S2009\/ESP-1496), and CSIC grant JAE-Predoc2008. I.J-S.\nacknowledges the funding received from the People Programme (Marie\nCurie Actions) of the European Union's Seventh Framework Programme\n(FP7\/2007-2013) under REA grant agreement number PIIF-GA-2011-301538. J.S-F. acknowledges the funding received from the project AYA2011-30147-C03-03.\nThe National Radio Astronomy Observatory is a facility of the\nNational Science Foundation operated under cooperative agreement\nby Associated Universities, Inc.\n\end{small}\n\n\bibliographystyle{mn2e}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}}
{"text":"\section{Introduction}\n\nBoson stars were first introduced in the late 1960s in \cite{BonazP66,Kaup68,RuffiB69}. They consist of a complex scalar field coupled to gravity. Various families of boson stars have been constructed, with various potentials for the scalar field, with or without rotation, in various dimensions (see Refs. \cite{Jetze92,LeeP92,SchunM03,LieblP12} and references therein).\n\nOne of the main motivations for the study of boson stars is the fact that they can act as black hole mimickers \cite{GuzmaR09}. 
What this means is that they can have a large mass within a small size without developing any horizon or singularity. Moreover, boson stars have no hard surface, the scalar field interacting with ordinary matter only through gravitation. Possible scenarios for the formation of boson stars have been studied. There have also been a number of studies proposing observational means of discriminating boson stars from black holes. This could, for instance, be done by direct imaging of the Galactic center \cite{VinceMGGS16}. Observations of gravitational waves could also be a fruitful way to investigate the nature of the compact objects observed in the Universe \cite{KesdeGK04, MacedPCC13}.\n\nApart from being an alternative to black holes, boson stars are very interesting test beds for many situations in which intense gravitational fields play a role. For instance, a class of peculiar orbits for massive particles is investigated in Ref. \cite{GrandSG14}. Other specific effects concerning the imaging of accretion disks around boson stars are studied in Refs. \cite{MeliaVGGMS15, VinceMGGS16}.\n\nThe main goal of the current paper is to investigate the existence of light rings around boson stars. Those are closed (circular) orbits of photons. They can only appear for very compact objects (dubbed ultracompact in Ref. \cite{CardoCMOP14}). It is known that black holes admit light rings and that they lie outside the horizon. The link between light rings and various instabilities was explored in a recent study by V. Cardoso {\em et al.} \cite{CardoCMOP14}. Orbits of photons around boson stars or black holes with scalar hair have already been discussed in Refs. \cite{CunhaHRR15, CunhaGHRRW16}, especially in the context of imaging those objects. 
In this work, those results are extended to boson stars with higher angular momentum.\n\nIn a recent paper~\cite{CardoFP16}, the connection between light rings and the gravitational waves observed by the LIGO observatory \cite{ligo} is studied. The first oscillations of the ringdown signal are shown to be the signature of the presence of a light ring rather than of an apparent horizon. Two objects with exactly the same structure in terms of light rings would generate similar early ringdown signals, regardless of the presence of a horizon.\n\nThis paper is organized as follows. In Sec. \ref{s:models} the reader is reminded about models of boson stars. In particular, the high compactness of such objects is exhibited. Section~\ref{s:rings} describes the study of light rings by means of an effective potential method. In Sec. \ref{s:points}, a particular class of light rings, in which the photons are {\em at rest} and on a stable orbit, is investigated. Those {\em light points} are a new class of orbits, typical of boson stars. Indeed, their appearance requires an intense gravitational field but also the absence of a horizon. In the case of black holes, stationary orbits are located exactly on the horizon and are unstable. Conclusions are given in Sec. \ref{s:ccl}.\n\n\n\section{Boson star models} \label{s:models}\n\nIn this section we recall the setting used in Ref. \cite{GrandSG14} to compute boson star spacetimes. Boson stars consist of a complex scalar field $\Phi$ coupled to gravity. This is achieved by considering the action \n\begin{equation}\label{e:action}\nS = \int \l({\mathcal L}_g + {\mathcal L}_\Phi\r) \sqrt{-g} \,{\rm d}^4x,\n\end{equation}\nwhere ${\mathcal L}_g$ is the Hilbert-Einstein Lagrangian and ${\mathcal L}_\Phi$ is the Lagrangian of the complex scalar field. 
They are given by the standard expressions\n\n\begin{eqnarray}\n {\mathcal L}_g &=& \frac{1}{16\pi}R \\\n{\mathcal L}_\Phi &=& -\frac{1}{2}\l[\nabla_\mu \Phi \nabla^\mu \bar{\Phi} + V\l(\l|\Phi\r|^2\r)\r].\n\end{eqnarray}\n\n$\nabla$ denotes the covariant derivative associated with ${\bf g}$, and $R$ is the Ricci scalar. $V\l(\l|\Phi\r|^2\r)$ is a potential that can be chosen to construct various types of boson stars. In this paper one considers the simplest possible choice, in which the scalar field is a free field, which implies that\n\n\begin{equation}\nV\l(\l|\Phi\r|^2\r) = \frac{m^2}{\hbar^2} \l|\Phi\r|^2.\n\end{equation}\n\n$m$ is the mass of the individual boson and the factor $m\/\hbar$ then appears as a scale factor for the various quantities (distances, masses, etc.). Throughout this paper geometric units are used, such that $G=1$ and $c=1$.\n\nRotating boson stars are computed by demanding that the scalar field takes the form\n\begin{equation}\n\Phi = \phi\l(r,\theta\r) \exp\l[i\l(\omega t-k\varphi\r)\r],\n\end{equation}\nwhere $\phi$ is the amplitude of the field (and thus real), $\omega$ is a real constant (smaller than $m\/\hbar$) and $k$ an integer known as the rotational quantum number. Because of the $U\l(1\r)$ symmetry of the action (\ref{e:action}), the latter does not depend on $t$ or $\varphi$, leading to axisymmetric solutions. This is why the amplitude $\phi$ depends on $\l(r,\theta\r)$ only. The numbers $\omega$ and $k$ appear as parameters of the various boson stars. Let us point out that the case $k=0$ corresponds to spherically symmetric boson stars.\n\nFor the metric, quasi-isotropic coordinates are well adapted to the symmetry of the problem. 
The metric reads\n\\begin{equation}\ng_{\\mu\\nu} {\\rm d}x^\\mu {\\rm d}x^\\nu = -N^2 {\\rm d}t^2 + A^2 \\l({\\rm d}r^2 + r^2{\\rm d}\\theta^2\\r) + B^2 r^2\\sin^2\\theta\\l({\\rm d}\\varphi+ \\beta^\\varphi{\\rm d}t\\r)^2.\n\\end{equation}\n\nIn the 3+1 language, $N$ is the lapse, the shift vector is $\\beta^i= \\l(0,0,\\beta^\\varphi\\r)$ and the spatial metric is $\\gamma_{ij} = {\\rm diag}\\l(A^2, B^2r^2, B^2r^2\\sin^2\\theta\\r)$. The unknowns are the functions $N$, $A$, $B$ and $\\beta^\\varphi$, all of them depending only on $\\l(r,\\theta\\r)$. The unknown fields obey the Einstein-Klein-Gordon system which results from the variation of the action (\\ref{e:action}) with respect to both the metric ${\\bf g}$ and the scalar field $\\Phi$. \n\nThose equations are solved by means of highly accurate spectral methods, implemented by the KADATH library \\cite{kadath}. This enables us to compute sequences of rotating boson stars for $k$ ranging from $0$ to $4$. Numerical methods and tests are given in detail in Ref. \\cite{GrandSG14}. We present here the sequences in a different way: by plotting the compactness of the various configurations. To do so, one needs to define the radius of the boson stars. This cannot be done unambiguously because the scalar field extends up to infinity and those objects have no true surface. In the following, the radius $R$ of the boson star is defined as the radius for which the field is $0.1$ times its maximal value, in the equatorial plane. The precise value of the thus-defined radius obviously depends on the value of the threshold (here $0.1$). However, because of the fast decay of the field far from the origin, it is a small effect. Figure \\ref{f:rad} shows the radii of the various boson star models. 
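Operationally, the threshold definition of the radius amounts to locating the outermost point of the sampled equatorial profile where the field falls to $0.1$ times its maximum. The sketch below is our own illustration of that definition (it is not part of the KADATH library), applied to a generic tabulated profile.

```python
import numpy as np

def boson_star_radius(r, phi, threshold=0.1):
    """Outermost equatorial radius at which the field amplitude
    drops to `threshold` times its maximum, with linear
    interpolation between the bracketing grid points."""
    target = threshold * phi.max()
    i = int(np.nonzero(phi >= target)[0].max())  # last sample above target
    if i == len(r) - 1:
        return float(r[-1])
    w = (phi[i] - target) / (phi[i] - phi[i + 1])
    return float(r[i] + w * (r[i + 1] - r[i]))
```

Because the field decays quickly far from the origin, the interpolated crossing is insensitive to the exact grid resolution, which is the same reason the radius itself depends only weakly on the chosen threshold.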
One can clearly see the known fact that the size of a boson star increases with $\omega$ and $k$.\n\n\begin{figure}[!hbtp]\n\includegraphics[width=12cm]{rad.pdf}\n\caption{\label{f:rad} Radii of the various boson stars, defined as the radius for which the field is $0.1$ times its maximal value, in the equatorial plane.}\n\end{figure}\n\nWith this definition of the radius, it is possible to compute the compactness, which is the dimensionless ratio $M\/R$, where $M$ is the standard Arnowitt-Deser-Misner mass. Compactness is shown in Fig. \ref{f:comp}. First, one can notice that the result depends very moderately on the value of $k$. The variations of both the mass and radius with $k$ are such that the compactness depends almost only on $\omega$. One can also notice that it can reach very high values, of order unity (remember that typical neutron stars have a compactness of about 0.2), illustrating the fact that boson stars are indeed good black hole mimickers. Let us also mention that spherically symmetric boson stars (i.e. for which $k=0$) can only reach a compactness of about $0.25$. This is mainly because they cannot attain small enough values of $\omega$, owing to the turning point in the sequences (see Fig. \ref{f:rad} for instance). The results obtained in the case $k=1$ are consistent with those shown in Fig. 2 of Ref. \cite{HerdeR15}.\n\n\begin{figure}[!hbtp]\n\includegraphics[width=12cm]{comp.pdf}\n\caption{\label{f:comp} Compactness of the various boson stars.}\n\end{figure}\n\n\section{Light rings} \label{s:rings}\n\nGiven the high compactness of boson stars, they can be expected to have light rings, which are closed orbits of photons. In Ref. \cite{CunhaHRR15} light rings are mentioned (see Fig. 1 in that reference) for both $k=0$ and $k=1$ boson stars, in the context of imaging them. We extend the study of light rings to boson stars with higher $k$. 
Mathematically speaking, one needs to find closed null geodesics of the spacetime. Given the fact that boson stars are axisymmetric objects, one looks for circular orbits in the equatorial plane, as in the case of a Kerr black hole. In this work such orbits are localized using an effective potential method. Let us mention that in Ref. \cite{CunhaGHRRW16}, similar techniques are employed to study photon orbits, in a somewhat more general setting (i.e. for orbits not confined to the orbital plane). Their results appear to be consistent with this work, in the case $k=1$.\n\nThe effective potential method is a standard technique to study orbits around axisymmetric and stationary objects, and it proceeds as follows. Let us call $U^\mu$ the tangent vector of a photon trajectory. As the orbits are in the equatorial plane, it reduces to $\l(U^t, U^r, 0, U^\varphi\r)$. Boson star spacetimes admit two independent Killing vectors: $\l(\partial_t\r)^\mu$ and $\l(\partial_\varphi\r)^\mu$. Their existence leads to two conserved quantities, the scalar products of $U^\mu$ with the two Killing vectors:\n\n\n\begin{eqnarray}\nU_\mu \l(\partial_t\r)^\mu &=& \l(-N^2+ B^2 r^2 \beta^{\varphi \, 2}\r) U^t + B^2 r^2 \beta^\varphi U^\varphi = -E \\\nU_\mu \l(\partial_\varphi\r)^\mu &=& B^2 r^2 \beta^\varphi U^t + B^2 r^2 U^\varphi = L,\n\end{eqnarray}\nwhere $E$ and $L$ are two constants along the geodesics. From those equations, one can express the components $U^t$ and $U^\varphi$ as functions of $E$ and $L$:\n\begin{eqnarray}\n\label{e:Ut}\nU^t &=& \frac{\beta^\varphi L + E}{N^2} \\\n\label{e:Up}\nU^\varphi &=& \frac{L}{B^2 r^2} - \frac{\beta^\varphi}{N^2}\l(\beta^\varphi L +E\r).\n\end{eqnarray}\nInserting those expressions into the null condition $U_\mu U^\mu=0$ leads to an equation for $U^r$. 
It can be put into the form $\l(U^r\r)^2 + V_{\rm eff} = 0$, where $V_{\rm eff}$ is an effective potential, given by \n\n\begin{equation}\n\label{e:veff}\nV_{\rm eff} = \frac{1}{A^2} \l[-\frac{\l(\beta^\varphi L +E\r)^2}{N^2} + \frac{L^2}{B^2 r^2}\r].\n\end{equation}\n\nCircular orbits are such that $V_{\rm eff}=0$ and $\partial_r V_{\rm eff}=0$. The second condition is necessary to exclude points that are merely the periastron or apoastron of an elliptic orbit. The first condition leads to\n\begin{equation}\n\label{e:EsL}\n\frac{E}{L} = -\beta^\varphi + \epsilon \frac{N}{Br},\n\end{equation}\nwhere $\epsilon = \pm 1$. Contrary to the massive particle case, $E$ and $L$ are not constrained independently but only via their ratio. This is basically because there is no mass scale in the case of photons. \n\nThe condition $\partial_r V_{\rm eff} = 0$, along with $V_{\rm eff}=0$, gives an equation involving the radius of the orbit and the various metric fields:\n\begin{equation}\nI\l(r\r) = \l(\epsilon \frac{\partial_r \beta^\varphi}{NB}\r) r^2 + \l(\frac{\partial_rB}{B^3}-\frac{\partial_rN}{NB^2}\r) r + \frac{1}{B^2} = 0.\n\end{equation}\n\nThis is a purely radial equation (remember one works in the equatorial plane), but it is not simply a quadratic equation in $r$, since the various fields themselves depend on the radius. Figure \ref{f:func} shows the functions $I\l(r\r)$ for two values of $\omega$, in the case $k=1$. Let us first mention that choosing $\epsilon=+1$ leads to functions $I\l(r\r)$ that always remain strictly positive. This corresponds to the dashed curves of Fig. \ref{f:func}. When $\epsilon=-1$, the functions $I\l(r\r)$ can sometimes vanish, depending on the boson star considered. $I\l(r\r)$ vanishes for the most relativistic boson stars, i.e. the ones with small values of $\omega$. It follows that those boson stars admit light rings. In Fig. 
\ref{f:func}, the boson star $k=1$, $\omega=0.8$ has no light ring (the solid blue curve is always positive), whereas the boson star $k=1$, $\omega=0.7$ has two light rings, corresponding to the two values of $r$ at which $I\l(r\r)$ vanishes (see the solid black curve).\n\n\begin{figure}[!hbtp]\n\includegraphics[width=12cm]{func.pdf}\n\caption{\label{f:func} Function $I\l(r\r)$ for $k=1$. The blue curves correspond to $\omega=0.8$ and the black ones correspond to $\omega=0.7$. The dashed lines denote the cases in which $\epsilon=+1$ and the solid lines denote the cases in which $\epsilon=-1$. }\n\end{figure}\n\nFigure \ref{f:min} shows the value of the minimum of $I\l(r\r)$, in the case $\epsilon=-1$. When this minimum is below zero the boson star admits light rings. This corresponds to the configurations with low values of $\omega$, and it can happen for all values of $k>0$. The case $k=0$ is not shown in Fig. \ref{f:min} because the minimum is far above zero. The spherically symmetric boson stars considered in this paper have no light ring. This may be linked to the fact that they cannot reach a high enough compactness. On the other hand, the authors of Ref. \cite{CunhaHRR15} mentioned the existence of such orbits, even for $k=0$ boson stars. However, this is only possible for objects that are farther along the sequence than the configurations explored here (see Fig. 1 of Ref. \cite{CunhaHRR15}).\n\n\begin{figure}[!hbtp]\n\includegraphics[width=12cm]{min.pdf}\n\caption{\label{f:min} Minimum of the function $I\l(r\r)$, when $\epsilon=-1$. The configurations for which the minimum is below zero admit light rings. }\n\end{figure}\n\nThe shape of the function $I\l(r\r)$ depicted in Fig. \ref{f:func} is very generic: when the minimum of $I\l(r\r)$ is negative there are two light rings, corresponding to two radii $R_{\rm minus}$ and $R_{\rm plus}$. 
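In practice, once the metric functions are known on a radial grid in the equatorial plane, the light-ring radii can be located from the sign changes of $I(r)$. The helper below is our own sketch of that procedure; the test exercises it with synthetic metric functions rather than actual KADATH output.

```python
import numpy as np

def light_ring_radii(r, N, B, beta, eps=-1):
    """Locate the zeros of
    I(r) = eps * (d beta/dr)/(N B) * r^2
         + (dB/dr / B^3 - dN/dr / (N B^2)) * r + 1/B^2
    by linear interpolation across sign changes of the sampled I."""
    dN, dB, dbeta = (np.gradient(f, r) for f in (N, B, beta))
    I = eps * dbeta / (N * B) * r**2 \
        + (dB / B**3 - dN / (N * B**2)) * r + 1.0 / B**2
    roots = []
    for i in np.nonzero(np.diff(np.sign(I)))[0]:
        w = I[i] / (I[i] - I[i + 1])   # fraction of the step to the zero
        roots.append(float(r[i] + w * (r[i + 1] - r[i])))
    return roots
```

An empty return list corresponds to a boson star without light rings, while two roots reproduce the generic double-crossing shape discussed above.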
Moreover, it is easy to see that $\\partial_r V_{\\rm eff}$ and $I\\l(r\\r)$ have opposite signs. It follows that $R_{\\rm minus}$ corresponds to a minimum of $V_{\\rm eff}$ and is stable whereas $R_{\\rm plus}$ corresponds to a maximum of $V_{\\rm eff}$ and so to an unstable orbit. The radii of the light rings are shown in Fig. \\ref{f:radlight}.\n\n\\begin{figure}[!hbtp]\n\\includegraphics[width=12cm]{rminus.pdf}\n\\includegraphics[width=12cm]{rplus.pdf}\n\\caption{\\label{f:radlight} Radii of the inner light rings (first panel) and the outer ones (second panel). }\n\\end{figure}\n\nBy inserting Eq. (\\ref{e:EsL}) into Eqs. (\\ref{e:Ut}) and (\\ref{e:Up}) one can find the value of $\\displaystyle\\frac{{\\rm d}\\varphi}{{\\rm d}t}$ for the light rings. It leads to\n\\begin{equation}\n\\label{e:dpdt}\n\\frac{{\\rm d}\\varphi}{{\\rm d}t} = -\\frac{N}{Br} - \\beta^\\varphi.\n\\end{equation}\nThe right-hand side of Eq. (\\ref{e:dpdt}) must be evaluated at $R_{\\rm minus}$ or $R_{\\rm plus}$ depending on the light ring considered. The tangent vector of the trajectories is then given by $U^\\mu = \\l(1, 0, 0, \\displaystyle\\frac{{\\rm d}\\varphi}{{\\rm d}t}\\r)$ where the parameter along the trajectory has been chosen to be the coordinate $t$. Orbital frequencies of the inner (respectively outer) light rings are shown in Fig. \\ref{f:dpdt}. They are discussed further in Sec. \\ref{s:points}.\n\n\\begin{figure}[!hbtp]\n\\includegraphics[width=12cm]{dpdt_minus.pdf}\n\\includegraphics[width=12cm]{dpdt_plus.pdf}\n\\caption{\\label{f:dpdt} Orbital frequency $\\displaystyle\\frac{{\\rm d}\\varphi}{{\\rm d}t}$ of the inner light rings (first panel) and the outer ones (second panel). }\n\\end{figure}\n\n\n\\section{Light points}\\label{s:points}\n\nThe most striking result from Fig. \\ref{f:dpdt} is the fact that the orbital frequency of the inner light ring can change sign. This implies that it can even vanish, for certain boson stars. 
The associated trajectories are simply given by $U^\mu = \l(1, 0, 0, 0\r)$. In terms of spatial coordinates $\l(r,\theta,\varphi\r)$, those photons are not moving. They correspond more to light points than to light rings. This is more than a mere coordinate effect. Indeed, those photons can be said to be at rest in the sense that their worldline is collinear with the Killing vector that corresponds to the asymptotic time translation symmetry. This notion of ``at rest'' is not new and is indeed identical to the one used when defining an ergoregion.\n\n\nEven though those light points may seem peculiar, a detailed study can confirm their existence and location. A first condition is that the vector $U^\mu = \l(1, 0, 0, 0\r)$ must be a null vector. For such a vector the norm reduces to $U_\mu U^\mu = g_{tt}$, so this condition is simply $g_{tt}=0$. It is the same as the one defining the boundary of an ergoregion (see Sec. IV-E of Ref. \cite{GrandSG14} for a detailed study of ergoregions of boson stars). Light points must therefore lie on the boundary of an ergoregion.\n\nHowever, not all null curves are geodesics. The geodesic equation is $\displaystyle\frac{{\rm d}U^\mu}{{\rm d}t} + \Gamma^\mu_{\alpha\beta} U^\alpha U^\beta = C\l(t\r) U^\mu$, where $C\l(t\r)$ is a function along the curve. Note that, {\em a priori}, one cannot set $C\l(t\r)=0$ because there is no guarantee that $t$ is an affine parameter. $\Gamma$ denotes the four-dimensional Christoffel symbols. Inserting $U^\mu = \l(1, 0, 0, 0\r)$ in the geodesic equation leads to the condition\n\n\begin{equation}\n\label{e:geo}\n\Gamma^\mu_{tt} = \frac{1}{2} g^{\mu \alpha} \l[-\partial_\alpha g_{tt}\r] = C\l(t\r) U^\mu,\n\end{equation}\nwhere all the time derivatives have been set to zero. Moreover, $\partial_t g_{tt}$ and $\partial_\varphi g_{tt}$ vanish due to the stationarity and axisymmetry of the problem. 
$\\partial_\\theta g_{tt}$ is also zero because the considered orbits are in the equatorial plane which is a surface of symmetry. Equation (\\ref{e:geo}) then reduces to $g^{\\mu r} \\l[-\\partial_r g_{tt}\\r] = 2C\\l(t\\r) U^\\mu$. Given the form of the metric (i.e. the use of quasi-isotropic coordinates), one can show that $g^{tr}$, $g^{\\theta r}$ and $g^{\\varphi r}$ all vanish. It first implies that one must have $C\\l(t\\r)=0$ which means that $t$ is indeed an affine parameter. Only the radial component of the geodesic equation is then not trivially satisfied and it reduces to \n\\begin{equation}\n\\partial_r g_{tt}=0.\n\\end{equation}\n\nThis condition on the derivative, along with the fact that $g_{tt}$ must vanish at the light points implies that they can only be situated exactly where an ergoregion starts to form. This is illustrated in Fig. \\ref{f:ergo}, in which the quantity $-g_{tt}$ is plotted, in the plane $z=0$, for three different boson stars. The curve $\\omega=0.8$ never goes to zero ; hence the associated boson star has no ergoregion. The curve $\\omega=0.6$ gets below zero and vanishes at two different radii. They are the inner and outer radii of the ergoregion (for boson stars ergoregions have the shape of a torus ; see Sec. IV-E of Ref. \\cite{GrandSG14}). The curve $\\omega\\approx 0.638$ corresponds to the case in which an ergoregion appears. $g_{tt}$ vanishes at just one point, which is also the minimum of the curve. This point is such that $g_{tt}=0$ and $\\partial_r g_{tt}=0$ and so corresponds to a light point.\n\n\\begin{figure}[!hbtp]\n\\includegraphics[width=12cm]{ergo.pdf}\n\\caption{\\label{f:ergo} $-g_{tt}$ for three boson stars with $k=2$. The upper curve has no ergoregion. The lower one has a standard ergoregion in the shape of a torus. 
The middle curve corresponds to the critical case in which an ergoregion just starts to appear.}\n\end{figure}\n\n\begin{figure}[!hbtp]\n\includegraphics[width=12cm]{rad_points.pdf}\n\caption{\label{f:rad_points} For boson stars with $k=2$, inner and outer radii of the ergoregion (black curves). The red and blue curves denote the radius of the inner light ring, with different signs in the orbital frequency. All the curves intersect at the location of the circle, which marks the light point.}\n\end{figure}\n\nFigure \ref{f:rad_points} gives another illustration of the location of the light points (for the sequence $k=2$). The solid black curve denotes the inner radius of the ergoregion and the dashed black one denotes the outer one. Those two curves join at $\omega\approx 0.638$, which is the frequency at which an ergoregion starts to develop. The blue curve shows the radius of the inner light rings for which ${\rm d}\varphi\/{\rm d}t <0$, whereas the red curve denotes the same radius but for configurations with ${\rm d}\varphi\/{\rm d}t >0$. It is clear from Fig. \ref{f:rad_points} that the light point lies at the intersection of the various curves, i.e. at the point where the ergoregion starts to develop. This is not so surprising. Ergoregions are defined as the locations where nothing can remain at rest, due to the frame-dragging effect generated by rotation. When an ergoregion just appears, only massless particles moving at the speed of light can overcome this effect and remain at rest. In other words, at the onset of the ergoregion, the frame-dragging effect is just compensated by the velocity of light. Each sequence of boson stars with $k>0$ admits a single light point. The values of $\omega$ and $R_{\rm minus}$ are summarized in Table \ref{t:points}.\n\nIt is worth discussing the existence of light points in the case of black holes. First, they cannot exist around rotating Kerr black holes. 
Indeed, as previously seen, light points exist only when the ergoregion is limited to a single ring. In the case of a Kerr black hole, the ergoregion has a nonzero volume as soon as $a \\neq 0$, that is, as soon as one deviates from the Schwarzschild case. When $a=0$, photons at rest do indeed exist but they are on orbits exactly on the horizon (they are the null generators of the event horizon). However, and this is the main difference from the boson star case, they correspond to unstable orbits. Stable light points are therefore very specific to boson star spacetimes.\n\n\\begin{table}\n\\caption[]{\\label{t:points}\nParameters of the light points for $k=1$ to $k=4$.\n} \n\\begin{tabular}{| c | c | c | }\n \\hline\n $k$ & $\\omega$ & $R_{\\rm minus}$ \\\\\n\\hline\n$1$ & $0.6582$ & $0.779$ \\\\\n$2$ & $0.6387$ & $1.99$ \\\\\n$3$ & $0.6383$ & $3.36$ \\\\\n$4$ & $0.6401$ & $4.83$ \\\\\n\\hline\n \\end{tabular}\n\\end{table}\n\nThe results can be checked by numerically integrating the geodesic equation. This is done using the tool Gyoto \\cite{VincePGP11}. One can compute the radius of a light ring, put a photon there with the right angular frequency [i.e. given by Eq. (\\ref{e:dpdt})], and check whether the computed orbit is indeed circular. Figure \\ref{f:gyoto_orbits} shows such computations, in the case $k=2$. The first panel corresponds to the inner light ring ($R_{\\rm minus} = 3.11139$ and ${\\rm d}\\varphi\/{\\rm d}t = -0.0404863$) and the second panel corresponds to the outer light ring ($R_{\\rm plus} = 7.07489$ and ${\\rm d}\\varphi\/{\\rm d}t = -0.0637682$), both for $\\omega=0.7$. The photon on the inner light ring remains nicely on a circular orbit. The one on the outer light ring starts by describing a circular orbit but is eventually kicked out. This is consistent with the fact that only the inner light ring is stable, corresponding to a minimum of the effective potential. The last panel shows the light point of the $k=2$ sequence (i.e. $\\omega=0.6387$ and $R=1.99$). 
As expected, the photon stays at the same spatial location and so appears as a single point in Fig. \\ref{f:gyoto_orbits}.\n\n\\begin{figure}[!hbtp]\n\\includegraphics[width=8cm]{inner_orbit.pdf}\n\\includegraphics[width=8cm]{outer_orbit.pdf}\n\\includegraphics[width=8cm]{point_orbit.pdf}\n\\caption{\\label{f:gyoto_orbits} Orbits integrated numerically. The first panel corresponds to a stable inner light ring whereas the second one depicts the orbit of an unstable outer one (for $k=2$ and $\\omega=0.7$). The last panel shows the light point of the $k=2$ sequence.}\n\\end{figure}\n\n\\section{Conclusion}\\label{s:ccl}\n\nIn this paper, models of rotating boson stars are studied. They were obtained numerically in a previous work \\cite{GrandSG14}. It is confirmed that rotating boson stars are objects that can reach very high compactness. This implies that boson stars can exhibit effects in which strong gravity is crucial, such as the existence of closed orbits of photons. The existence of those orbits, called light rings, is confirmed for a large class of boson stars. Light rings could have strong implications for the stability of the boson stars themselves \\cite{CardoCMOP14}.\n\nA new class of light rings, which corresponds to stable trajectories in which the photons are at rest with respect to an observer at infinity, is exhibited. It is proposed to call those orbits light points. Their existence is closely linked to the appearance of an ergoregion. They are very specific to boson stars in the sense that they cannot exist around black holes. Indeed, in that case, they are located exactly on the horizon and correspond to unstable trajectories. It is believed that this is the first time that such stable light points have been discussed. Their existence has been confirmed by a direct integration of the geodesic equation.\n\n\\acknowledgments{The author thanks V. Cardoso and E. 
Gourgoulhon for convincing him to study light rings around boson stars.}\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} diff --git a/data_all_eng_slimpj/shuffled/split2/finalzzfdww b/data_all_eng_slimpj/shuffled/split2/finalzzfdww new file mode 100644 index 0000000000000000000000000000000000000000..eeda49105aca03b67a34243782b6a9c91caad06e --- /dev/null +++ b/data_all_eng_slimpj/shuffled/split2/finalzzfdww @@ -0,0 +1,5 @@ +{"text":"\\section{Introduction}\n\nEntanglement and its generation, besides its intrinsic interest as a unique quantum correlation and resource for quantum information processing \\cite{HHHH2009, NieChu2010}, continues to be vigorously investigated due to its relevance to a wide range of questions such as\nthermalization and the foundations of statistical physics \\cite{Popescu06,JC05,DeutchLiSharma2013,Kaufman16,Rigol2016}, decoherence \\cite{Zurek91,Haroche1998,Zurek03,Davidovich_2016}, delocalization \\cite{WSSL2008,kim13} and quantum chaos in few and many-body systems \\cite{Miller99, Bandyopadhyay02, Wang2004,LS05,petitjean2006lyapunov, tmd08, Amico08,Chaudhary}. Some of these issues\nconcern the rate of entanglement production, the nature of multipartite entanglement sharing and distribution, and long-time saturation or indefinite growth. Entanglement production in integrable {\\it versus} nonintegrable systems is an\nactive area of study, and it is generally appreciated that, concomitant with the production of\nnear random states in nonintegrable systems, large entanglement is produced that can lead to thermalization\nof subsystems.\n\nSuddenly joining two spin chains, each in its ground state, produces entanglement growth $\\sim \\ln(t\/a)$ at long times~\\cite{Calabrese_2007}, whereas quenches starting from arbitrary states can produce large entanglement including a linear growth phase \\cite{JC05}. 
Similarly, in an ergodic or eigenstate-thermalized phase a system can show ballistic entanglement growth \\cite{kim13,Ho17}, whereas in the many-body localized phase the growth is known to be logarithmic in time. The entanglement in almost all these studies relates to bipartite block entanglement between\nmacroscopically large subsystems in many-body systems.\n\nEven earlier works have explored the entanglement in Floquet or periodically forced systems such as the coupled kicked tops or standard maps, as a means to study the relationship between chaos and entanglement \\cite{FNP1998,Miller99,Lakshminarayan2001, Bandyopadhyay02,Fujisaki03,Bandyopadhyay04}.\nIt was seen that chaos in general increases bipartite entanglement and results in near\nmaximal entanglement as the states become typical, in the sense of Haar or uniform measure.\nThe initial states considered were mostly phase-space localized coherent states. In the case when the uncoupled systems are chaotic, and the interactions are weak, after a short Ehrenfest time scale\nthe growth of the entropy or entanglement is essentially as if the coherent states were initially random subsystem states. In cases in which the subsystems are fully chaotic,\nthe growth of entanglement (beyond the Ehrenfest time) is dependent on the coupling strength rather than on measures of chaos such as the Kolmogorov-Sinai entropy or the Lyapunov exponent \\cite{Fujisaki03,Bandyopadhyay04}. A linear growth was observed in this case and perturbation theory \\cite{Fujisaki03} is successful in describing it, and the more extended time behavior was described by perturbation theory along with random matrix theory (RMT) \\cite{Bandyopadhyay04}. 
The linear growth leads to saturation values that are interaction independent, and in cases of moderately large coupling this is just the bipartite entanglement of typical or random states in the full Hilbert space.\n\nIn sharp contrast is the case for which the initial states are eigenstates of the non-interacting fully chaotic systems. This presents a very different scenario for weak couplings that, to our knowledge, has not been previously studied. The present paper develops a full theory in this case, starting\nfrom a properly regularized perturbation theory wherein a universal growth curve involving a suitably scaled time is derived. Importantly, this is further developed into a theory valid for non-perturbative strong couplings. We study these cases as an ensemble average over all uncoupled eigenstates, which clearly form a special set of states in the Hilbert space for weak coupling.\n\n The entanglement production starts off linearly, as in the case of generic states, before saturating at much smaller entanglement values that are manifestly and strongly interaction dependent and reflect a ``memory'' of the initial ensemble to which the state belongs. Interestingly, while the linear regime is independent of whether the system possesses time-reversal symmetry or not, the subsequent behavior differs, and in particular the saturation value of the entanglement is larger when time-reversal symmetry is broken. For generic initial states, time-reversal symmetry does not play a significant\n role in the entanglement production or saturation.\nAs the interaction is increased, this saturation approaches that of random or typical states and the memory of starting off as a special initial state no longer persists. 
An essential aspect of this study is to elucidate at what interaction strength such a transition happens.\n\nThe interaction is properly measured by a scaled dimensionless transition parameter $\\Lambda$ that also determines transitions in the spectral statistics and eigenfunction entanglements of such systems \\cite{Srivastava16,Lakshminarayan16,Tomsovic18c}. If $\\Lambda=0$, the noninteracting case, although the two subsystems are chaotic, the quantum spectrum of the system has a Poisson level spacing statistic \\cite{Tkocz12} and therefore has many nearly degenerate levels which start to mix when the subsystems are weakly coupled. If $\\Lambda \\ll 1$, we are in a perturbative regime wherein the eigenstates with appreciable entanglements have a Schmidt rank of approximately two, i.e.\\ the reduced density matrix of the eigenstates has at most two principal nonzero eigenvalues. This circumstance carries over to time evolving states which are initially product eigenstates. Universal features of the eigenstate entanglement depend only on the single scaled parameter $\\Lambda$. For example, the linear entropy of the eigenstates $\\sim \\sqrt{\\Lambda}$ \\cite{Lakshminarayan16,Tomsovic18c}. On the other hand, as shown in this paper, time evolving states develop a linear entropy $\\sim C(2,t)\\sqrt{\\Lambda}$, with $C(2,t)$ being a {\\it universal} function for a properly scaled time, independent of the details of the interaction or the chaotic subsystem dynamics, except for a slight dependence on whether the system is time-reversal symmetric or not.\n\n\nSuch a universality follows from the existence of underlying RMT models that\ndescribe the transition from uncoupled to strongly coupled systems, a RMT transition ensemble. 
Although it is standard to\napply RMT for stationary states and spectral statistics \\cite{Bohigas84, Brody81, Haake10}, as indeed done for strongly chaotic and weakly\ninteracting systems \\cite{Srivastava16,Lakshminarayan16,Tomsovic18c}, it is noteworthy that this is typically not valid for the time evolution, because RMT lacks the correlations required for describing short time dynamics properly. However, the time scales\nover which the entanglement develops are much longer than the Ehrenfest time, after which specific dynamical system features disappear. Thus, universal behaviors can be derived from such RMT transition ensembles provided the time scales of interest remain much longer than the Ehrenfest time scale. It turns out that for $\\Lambda \\gtrsim 1$, the interaction is strong enough that the\nsystem has fluctuations that are typical of RMT over the whole space; for example, the consecutive neighbor spacing of eigenvalues is that of Wigner \\cite{Srivastava16}. This also signals the regime for which eigenstates have the typical entanglement of random states \\cite{Lakshminarayan16,Tomsovic18c} and, as shown here, the time-evolving states lose memory of whether they initially belonged to special ensembles such as the noninteracting eigenstates.\n\nAlthough regularized perturbation theory, initially developed for studying symmetry breaking in\nstatistical nuclear physics~\\cite{French88a,Tomsovicthesis}, is used in the $\\Lambda \\ll 1$ regime, a novel recursive use of the perturbation theory allows for approximate, but very good, extensions to the non-perturbative regime. In fact, it covers the full transition well. 
This provides an impressive connection of the entanglement, both as a function of time and as a function of the interaction strength, to the RMT regime where nearly maximal entanglement is reached and formulas such as Lubkin's for the linear entropy \\cite{Lubkin78} and Page's for the von Neumann entropy \\cite{Page93,Sen1996} are recovered.\nWe illustrate the general theory by specifically considering both time-reversal symmetric and violating RMT transition\nensembles, given by subsystem Floquet operators chosen from the circular orthogonal ensemble (COE)\nand the circular unitary ensemble (CUE), respectively. These are classic RMT ensembles consisting of unitary\nmatrices that are uniformly chosen with densities that are invariant under the orthogonal (COE) and unitary (CUE) groups \\cite{MehtaBook}.\n\nIn addition, we apply it to a dynamical system of coupled standard maps \\cite{Froeschle71,Lakshminarayan2001,Richter14}. The standard map is a textbook example of a chaotic Hamiltonian system and is simply a periodically kicked pendulum. There is a natural translation symmetry in\nthe angular momentum that makes it possible to consider the classical map on a torus phase space, with periodic boundary\nconditions in both position and momentum. This yields convenient finite dimensional models of quantum chaos, the dimension\nof the Hilbert space being the inverse scaled Planck constant. 
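For reference, one standard form of the classical kicked-rotor (standard) map on the unit torus reads\n\\begin{equation}\np_{n+1} = p_n + \\frac{K}{2\\pi} \\sin(2\\pi q_n) \\;\\; \\text{mod} \\; 1, \\qquad q_{n+1} = q_n + p_{n+1} \\;\\; \\text{mod} \\; 1,\n\\end{equation}\nwith kicking strength $K$; this particular normalization is given only for orientation, the precise conventions and the coupling used here being those of Refs. \\cite{Lakshminarayan2001,Srivastava16}. 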
The model we consider is that of coupling two\nsuch maps, which has proven useful in previous studies relating to entanglement \\cite{Lakshminarayan2001,Lakshminarayan16,Tomsovic18c}, spectral transitions \\cite{Srivastava16} and out-of-time-ordered correlators \\cite{RaviLak2019}.\n\nTwo possible concrete examples are\na pair of particles in a chaotic quantum dot with tunable interactions, and two spin chains that are in the ergodic phase before being suddenly joined.\nRecent experiments have accessed information about time-evolving states of interacting few-body systems via\nstate tomography of single or few particles, facilitating the study of the role of entanglement in the\napproach to thermalization of closed systems.\nSpecifically, experiments that have\nstudied nonintegrable systems include, for example, few qubit kicked top implementations \\cite{Chaudhary,Neill2016} and the Bose-Hubbard Hamiltonian \\cite{Kaufman16}. Thus, modifications of these to accommodate weakly interacting parts are conceivable. 
This work adds to the already voluminous contemporary research on thermalization in closed systems\nby looking in detail at the time evolution for a case when thermalization, in the sense of a typical subsystem entropy, is unlikely to occur \\cite{GWEG2018}, namely starting from product eigenstates and quenching the interactions by suddenly turning them on.\n\n\nThis paper is organized as follows:\nIn Sec.~\\ref{sec:background}\nthe necessary background material on entanglement in bipartite systems,\nthe random matrix transition ensemble and the universal\ntransition parameter is given.\nSec.~\\ref{sec:universal-entanglment-dynamics-perturbation-regime}\nprovides the perturbation theory for the\nuniversal entanglement, based on the eigenvalues of the reduced\ndensity matrix, ensemble averaging for the CUE or COE,\nand the invocation of a regularization.\nBased on this, the eigenvalue moments of the reduced density matrix\nare obtained in Sec.~\\ref{sec:eigenvalue-moments},\nleading to explicit expressions for the HCT entropies.\nIn particular, it is shown that for small interaction\nbetween the subsystems the simultaneous\nre-scaling of time and of the entropies by their saturation\nvalues leads to a universal curve which is independent\nof the interaction.\nThe extension to the non-perturbative regime\nis done in Sec.~\\ref{sec:NonPerturbative}\nby using a recursively embedded perturbation theory\nto produce the full transition and the saturation values.\nA comparison with a dynamical system given by\na pair of coupled kicked rotors is made\nin Sec.~\\ref{sec:coupled-kicked-rotors}.\nFinally, a summary and outlook is given in\nSec.~\\ref{sec:summary-and-outlook}.\n\n\n\n\\section{Background}\n\\label{sec:background}\n\n\\subsection{Entanglement in bipartite systems}\n\nConsider pure states, $|\\psi\\rangle$, of a bipartite system whose Hilbert space is a tensor product space, $\\mathcal{H}^A \\otimes \\mathcal{H}^B$, with subsystem dimensionalities, $N_A$ and $N_B$, 
respectively. Without loss of generality, let $N_A \\leq N_B$. The question to be studied is how entangled an initially unentangled state becomes, as a function of time, under the evolution of some dynamics.\n\nThe dynamics of a generic conservative system\ncould be governed by a Hamiltonian or by a unitary Floquet operator.\nSpecifically, a bipartite Hamiltonian system is of the form\n\\begin{equation}\nH(\\epsilon) = H_A \\otimes \\mathds{1}_B + \\mathds{1}_A \\otimes H_B + \\epsilon V_{AB} \\ , \\label{eq:GenericHamiltonian}\n\\end{equation}\nwhere the non-interacting limit is $\\epsilon = 0$. In the case of a quantum map, the dynamics can be described by a unitary Floquet operator~\\cite{Srivastava16}\n\\begin{equation}\n\\mathcal{U}(\\epsilon) = (U_A \\otimes U_B) U_{AB}(\\epsilon)\\ ,\n\\label{eq:GenericFloquet}\n\\end{equation}\nfor which the non-interacting limit is $U_{AB}(\\epsilon\\rightarrow 0) = \\mathds{1}$. We assume that both $V_{AB}$ and $U_{AB}(\\epsilon \\neq 0)$ are entangling interaction operators~\\cite{Lakshminarayan16}.\n\nThe Schmidt decomposition of a pure state is given by\n\\begin{equation}\n \\ket{\\psi} = \\sum_{l=1}^{N_A} \\sqrt{\\lambda_l} \\, \\ket{\\phi^A_l}\\ket{\\phi^B_l}.\n \\label{eq:GenericSchmidtDecomposition}\n\\end{equation}\nThe normalization condition on the state $\\ket{\\psi}$ gives\n\\begin{equation} \\label{eq:lambda-l-normalization}\n \\sum_{l=1}^{N_A} \\lambda_l = 1 .\n\\end{equation}\nThe state is unentangled if and only if the largest eigenvalue $\\lambda_1 = 1$ (all others vanishing), and maximally entangled if $\\lambda_l = 1\/N_A$ for all $l$. 
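In numerical work the $\\lambda_l$ are conveniently obtained from a singular value decomposition of the $N_A \\times N_B$ coefficient matrix of the state; a minimal sketch (the function name and the Bell-state check are illustrative only, not part of the formal development):

```python
import numpy as np

def schmidt_eigenvalues(psi, NA, NB):
    """Schmidt eigenvalues of a pure state |psi> of H_A tensor H_B.

    The squared singular values of the NA x NB coefficient matrix
    are the eigenvalues of the reduced density matrices.
    """
    C = np.asarray(psi, dtype=complex).reshape(NA, NB)
    s = np.linalg.svd(C, compute_uv=False)
    return np.sort(s**2)[::-1]

# Product state |00>: lambda = (1, 0), unentangled.
lam_prod = schmidt_eigenvalues([1, 0, 0, 0], 2, 2)

# Bell state (|00> + |11>)/sqrt(2): lambda = (1/2, 1/2), maximally entangled.
lam_bell = schmidt_eigenvalues(np.array([1, 0, 0, 1]) / np.sqrt(2), 2, 2)
```

The eigenvalues returned this way sum to unity, in accordance with the normalization condition above.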
By partial traces, it follows that the reduced density matrices\n\\begin{equation}\n\\rho^A = \\tr_B(\\ket{\\psi}\\bra{\\psi}), \\qquad \\rho^B = \\tr_A(\\ket{\\psi}\\bra{\\psi})\\ ,\n\\end{equation}\nhave the property\n\\begin{equation}\n\\rho^A \\ket{\\phi^A_l} = \\lambda_l \\ket{\\phi^A_l}, \\quad \\text{and} \\quad \\rho^B \\ket{\\phi^B_l} = \\lambda_l \\ket{\\phi^B_l},\n\\end{equation}\n respectively. They are positive semi-definite and share the same non-vanishing (Schmidt) eigenvalues $\\lambda_l$, while $\\{ \\ket{\\phi^A_l} \\}$ and $\\{ \\ket{\\phi^B_l} \\}$ form orthonormal basis sets in the respective Hilbert spaces. For subsystem $B$ there are $N_B-N_A$ additional vanishing eigenvalues and associated eigenvectors.\n\nA very useful class of entanglement measures is given by the von Neumann entropy and the Havrda-Charv\\'at-Tsallis (HCT) entropies~\\cite{Bennett96,Havrda67,Tsallis88,Bengtsson06}. The von Neumann entropy is given by\n\\begin{equation} \\label{eq:GenericVonNeumannEntropy}\n\\begin{split}\nS_1 & = -\\tr_A(\\rho^A \\ln \\rho^A ) = -\\tr_B(\\rho^B \\ln \\rho^B )\\\\\n &= -\\sum_{l=1}^{N_A} \\lambda_l \\ln \\lambda_l,\n\\end{split}\n\\end{equation}\nwhich vanishes if the state is unentangled and is maximized if all the nonvanishing eigenvalues are equal to $1\/N_A$. The HCT entropies are obtained from moments of the Schmidt eigenvalues. 
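As a simple check of the von Neumann entropy, in the maximally entangled case $\\lambda_l = 1\/N_A$ one finds\n\\begin{equation*}\nS_1 = -\\sum_{l=1}^{N_A} \\frac{1}{N_A} \\ln \\frac{1}{N_A} = \\ln N_A,\n\\end{equation*}\nwhich reduces to $\\ln 2$ for a Bell state ($N_A = 2$). 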
Defining\n\\begin{equation}\n\\mu_\\alpha = \\tr_A[(\\rho^A)^\\alpha] = \\tr_B[(\\rho^B)^\\alpha] = \\sum_{l=1}^{N_A} \\lambda_l^\\alpha, \\quad \\alpha >0 \\ ,\n\\label{eq:GenericMoments}\n\\end{equation}\ngives the HCT entropies as\n\\begin{equation} \\label{eq:GenericHCTEntropy}\n S_\\alpha = \\frac{1-\\mu_\\alpha}{\\alpha-1}.\n\\end{equation}\nNote that these differ from the R\\'enyi entropies \\cite{Ren1961wcrossref},\nwhich are defined by\n\\begin{equation*}\n R_\\alpha = \\dfrac{\\ln \\mu_{\\alpha}}{1-\\alpha} .\n\\end{equation*}\nIn the limit $\\alpha \\to 1$, $R_\\alpha$ also reduces to the\nvon Neumann entropy.\nIn this work we use the HCT entropies, as ensemble averages are easier to perform with $\\mu_{\\alpha}$ than with $\\ln \\mu_{\\alpha}$.\n\n\n\n\\subsection{Quantum chaos, random matrix theory, and universality}\n\nMany statistical properties of strongly chaotic quantum systems are successfully modeled and derived with the use of RMT~\\cite{Brody81,Bohigas84}. Generally speaking, the resulting properties are universal and, in particular, do not depend on any of the physical details of the system, with the exception of the symmetries that it respects. Here the subsystems are assumed individually to be strongly chaotic. 
Thus, the statistical properties of the dynamics, Eq.~(\\ref{eq:GenericFloquet}), can be modeled with the operators $U_A$ and $U_B$ being one of the standard circular RMT ensembles~\\cite{Dyson62e}, orthogonal, unitary, or symplectic, depending on the fundamental symmetries of the system~\\cite{Porterbook}.\n\nWe concentrate on the orthogonal (COE) and unitary (CUE) ensembles, corresponding to whether or not time-reversal invariance is preserved, respectively.\nThe derivation of the typical entanglement production for some initial state relies on the dynamics governed by\nthe random matrix transition ensemble \\cite{Srivastava16,Lakshminarayan16}\n\\begin{equation}\n\\mathcal{U}_{\\text{RMT}}(\\epsilon) = (U_A^\\text{RMT} \\otimes U_B^\\text{RMT}) U_{AB}(\\epsilon)\\ .\n\\label{eq:GenericFloquetRMT}\n\\end{equation}\nThe operator $U_{AB}(\\epsilon)$ is assumed to be diagonal in the direct product basis of the two subsystem ensembles. Explicitly, the diagonal elements are considered to be of the form $\\exp(2 \\pi i \\epsilon \\xi_{kl})$, where $\\xi_{kl}$ ($1 \\le k \\le N_A$, $1 \\le l \\le N_B$) is a random number uniformly distributed in $(-1\/2,1\/2]$.\n\n\n\\subsection{Symmetry breaking and the transition parameter}\n\nThe statistical properties of weakly interacting quantum chaotic bipartite systems have been studied recently, with the focus on spectral statistics, eigenstate entanglement, and measures of localization~\\cite{Srivastava16,Lakshminarayan16,Tomsovic18c}. If the subsystems are not interacting, the spectrum of the full system is just the convolution of the two subsystem spectra, giving an uncorrelated spectrum in the large dimensionality limit. The eigenstates of the system are unentangled. It is very fruitful to conceptualize this as a dynamical symmetry. Upon introducing a weak interaction between the subsystems, this symmetry is weakly broken. 
As the interaction strength increases, the spectrum becomes increasingly correlated, and the eigenstates entangled.\n\nHere $U_{AB}(\\epsilon)$ plays the role of a dynamical symmetry breaking operator. For $\\epsilon=0$, the symmetry is preserved (the dynamics of the subsystems are completely independent), and the larger $\\epsilon$ becomes, the more completely the symmetry is broken. It is known that for sufficiently chaotic systems, there is a universal scaling given by a transition parameter, which governs the influence of the symmetry breaking on the system's statistical properties~\\cite{French88a}. The transition parameter is defined as~\\cite{Pandey83}\n\\begin{equation}\n\\Lambda = \\frac{v^2(\\epsilon)}{D^2}, \\label{eq:LambdaDef}\n\\end{equation}\nwhere $D$ is the mean level spacing and $v^2(\\epsilon)$ is the mean square matrix element in the eigenbasis of the symmetry preserving system, calculated locally in the spectrum.\n\nFor the COE and CUE the leading behavior in $N_A$ and $N_B$ is \\cite{Srivastava16,Tomsovic18c,HerKieFriBae2019:p}\n\\begin{align}\n\\label{eq:Lambda-RMT}\n& \\, \\Lambda = \\frac{N_A N_B}{4 \\pi^2} \\left[1-\\dfrac{\\sin^2 (\\pi \\epsilon)}{\\pi^2\\epsilon^2} \\right]\n\\sim \\frac{\\epsilon^2 N_AN_B}{12}\\ ,\n\\end{align}\nwhere the last result is in the limit of large $N_A$, $N_B$.\nThe transition parameter $\\Lambda$ ranges over $0 \\le \\Lambda \\le N_A N_B\/4\\pi^2\\ (N_A, N_B \\rightarrow \\infty)$, where the limiting cases are fully symmetry preserving and fully broken, respectively.\nIn essence, the latter expression of Eq.~\\eqref{eq:Lambda-RMT} illustrates the fact that as the system size grows, a symmetry breaking transition has a discontinuously fast limit in $\\epsilon$.\n\nThe transition parameter gives the relation necessary to compare the statistical properties of systems of any size and kind to each other. As long as $\\Lambda$ has identical values, the systems have identical properties. 
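As a concrete numerical orientation (the values are chosen purely for illustration), for $N_A = N_B = 100$ the large-$N$ form of Eq.~\\eqref{eq:Lambda-RMT} gives\n\\begin{equation*}\n\\Lambda \\approx \\frac{(0.01)^2 (100)(100)}{12} \\approx 0.08, \\qquad \\Lambda \\approx \\frac{(0.1)^2 (100)(100)}{12} \\approx 8,\n\\end{equation*}\nfor $\\epsilon = 0.01$ and $\\epsilon = 0.1$, respectively; the first value lies deep in the perturbative regime, whereas the second already corresponds to essentially full symmetry breaking. 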
However, for a particular dynamical system, it can turn out to be rather difficult to calculate $\\Lambda$. Although the statistical properties are universal and independent of the nature of the system in this chaotic limit, properties such as whether the system is many-body or single-particle, fermionic or bosonic, actually enter into its calculation. For example, a method for calculating $\\Lambda$ for highly excited heavy nuclei is given in Ref.~\\cite{French88b}. The far simpler case of coupled kicked rotors is given in Ref.~\\cite{Srivastava16}, and is used ahead for illustration. In extended systems, the issue of localization emerges, which must also be taken into account, and for them the term sufficiently chaotic is meant to exclude a localized regime.\n\n\\section{Universal entanglement production -- Perturbative regime}\n\\label{sec:universal-entanglment-dynamics-perturbation-regime}\n\nThe starting point of a derivation of the typical production rate of entanglement in initially unentangled states is the random matrix transition ensemble~\\eqref{eq:GenericFloquetRMT}. Following a similar derivation sequence as for the eigenstates in Refs.~\\cite{Lakshminarayan16,Tomsovic18c}, the first step is to derive expressions for the eigenvalues of the reduced density matrix, which can be obtained from the Schmidt decomposition of the time evolved state of the system. Applying standard Rayleigh-Schr{\\\"o}dinger perturbation theory leads to perturbation expressions for the Schmidt eigenvalues. However, due to the Poissonian fluctuations in the spectrum of the non-interacting system, near-degeneracies occur too frequently and cause divergences in the ensemble averages. It is therefore necessary first to regularize the eigenvalue expressions appropriately. 
It also turns out that the perturbation expressions for the HCT entanglement measures can be further extended to a non-perturbative regime by recursively invoking the regularized perturbation theory, leading to a differential equation which is analytically solvable~\\cite{Tomsovic18c},\nsee Sec.~\\ref{sec:NonPerturbative}.\n\n\n\\subsection{Definitions}\n\nThe eigenvalues and corresponding eigenstates of the unitary operators\n$U_A$ and $U_B$ for the subsystems\nand of $\\mathcal{U}(\\epsilon) = (U_A \\otimes U_B) U_{AB}(\\epsilon)$\nof the full bipartite system \\eqref{eq:GenericFloquet} are given\nby the equations\n\\begin{eqnarray}\nU_A \\ket{j^A} &=& \\text{e}^{i \\theta_j^A} \\ket{j^A},\\quad j=1,2,3,\\ldots,N_A \\nonumber \\\\\nU_B \\ket{k^B} &=& \\text{e}^{i \\theta_k^B} \\ket{k^B}, \\quad k=1,2,3,\\ldots,N_B\\\\\n\\mathcal{U}(\\epsilon) \\ket{\\Phi_{jk}} &=& \\text{e}^{i \\varphi_{jk}} \\ket{\\Phi_{jk}}\\ .\n\\nonumber\n\\end{eqnarray}\nTo simplify the notation, the superscripts $A$ and $B$ are dropped for both the eigenkets, $\\ket{j^A}\\ket{k^B} \\equiv \\ket {jk}$, and the eigenvalues, $\\theta_j^A \\equiv \\theta_j$ ($\\theta_k^B \\equiv \\theta_k$). It is understood that the labels $j$ and $k$ are reserved for the subsystems $A$ and $B$, respectively. Similarly for convenience, the subscript $AB$ is dropped from the operator $V_{AB}$.\n\nGiven the form \\eqref{eq:GenericFloquet} of the unitary operator $\\mathcal{U}(\\epsilon)$, in the limit $\\epsilon \\rightarrow 0$ one has $\\ket{\\Phi_{jk}} \\rightarrow \\ket{jk}$; these are product eigenstates of the unperturbed system and form a complete basis, with spectrum $\\varphi_{jk} \\rightarrow \\theta_{jk} = \\theta_j + \\theta_k \\,\\,\\text{mod}\\,\\, 2\\pi$. 
For non-vanishing $\\epsilon$ there is a unitary transformation $S$ between the eigenbases for the set $\\ket{\\Phi_{jk}}$ and $\\ket{jk}$ whose matrix elements can be identified using the relations\n\\begin{eqnarray}\n\\ket{\\Phi_{jk}} &=& \\sum_{j'k'} S_{jk,j'k'} \\ket{j'k'} = \\sum_{j'k'} \\ket{j'k'} \\bra{j'k'}\\ket{\\Phi_{jk}} \\nonumber \\\\\n\\ket{jk} &=& \\sum_{j'k'} S^\\dagger_{jk,j'k'} \\ket{\\Phi_{j'k'}}\\ .\n\\label{eq:UnitaryTransform}\n\\end{eqnarray}\n\n\\subsection{Eigenvalues of the reduced density matrix}\n\nIn the limit $N_A \\rightarrow \\infty$, perturbation theory for unitary Floquet systems generates the same equations as for Hamiltonian systems up to vanishing corrections of $\\mathcal{O}((N_A N_B)^{-1})$ if one identifies $U_{AB}(\\epsilon)=\\exp(i\\epsilon V)$~\\cite{Tomsovic18c}. For an initial unentangled state, begin by considering an eigenstate $\\ket{jk}$ of the non-interacting system. Denote the time evolution of this initial state after $n$ iterations of the dynamics as $\\ket{j k(n;\\epsilon)}$ [$ = \\mathcal{U}^n(\\epsilon) \\ket{j k}$]. Upon the usual insertion of the completeness relation one gets\n\\begin{equation}\n\\ket{jk(n;\\epsilon)} = \\sum_{j'k'} \\text{e}^{i n \\varphi_{j'k'}} S^\\dagger_{jk,j'k'} \\ket{\\Phi_{j'k'}}. 
\\label{eq:GeneralTimeEvolvedState}\n\\end{equation}\nThis time evolved state has a standard Schmidt decomposed form\n\\begin{equation}\n\\ket{jk(n;\\epsilon)} = \\sum_{l = 1}^{N_A} \\sqrt{\\lambda_l(n;\\epsilon)} \\, \\ket{\\phi^A_l(n;\\epsilon)}\\ket{\\phi^B_l(n;\\epsilon)},\n\\label{eq:GeneralTimeEvolvedStateInSDForm}\n\\end{equation}\nwhere $\\lambda_1 \\geq \\lambda_2 \\geq \\ldots \\geq \\lambda_{N_A}$ are time-dependent Schmidt numbers (eigenvalues of the reduced density matrices) such that $\\sum_l \\lambda_l(n;\\epsilon) = 1$, and $\\{ \\ket{\\phi^A_l(n;\\epsilon)}\\}$, $\\{ \\ket{\\phi^B_l(n;\\epsilon)}\\}$ are the corresponding Schmidt eigenvectors of the $A$ and $B$ subspaces, respectively.\n\nIt was shown in Ref.~\\cite{Lakshminarayan16} that for weak perturbations, the Schmidt decomposition of the eigenstates to $\\mathcal{O}(N_A^{-1})$ corrections is given by the neighboring eigenstates of the unperturbed (non-interacting) system and the perturbation theory coefficients. This can be considered a kind of automatic Schmidt decomposition.\n The generalization to the time evolving states, $\\ket{j k(n;\\epsilon)}$, follows by another insertion of the unitary transformation $S$ to give\n\\begin{equation}\n\\ket{jk(n;\\epsilon)} = \\sum_{j''k''} \\sum_{j'k'} \\text{e}^{i n \\varphi_{j'k'}} S^\\dagger_{jk,j'k'} S_{j'k',j''k''} \\ket{j''k''}. \\label{eq:GeneralTimeEvolvedState2}\n\\end{equation}\nThis leads to the identification\n\\begin{equation}\n\\lambda_l(n;\\epsilon) = \\left| \\sum_{j'k'} \\text{e}^{i n \\varphi_{j'k'}} S^\\dagger_{jk,j'k'} S_{j'k',(jk)_l} \\right|^2,\\label{eq:SchmidtNumbersPT}\n\\end{equation}\nwhere $j''k''\\rightarrow (jk)_l$, meaning that fixing $l$ fixes a unique and distinct index pair $(jk)_l$; e.g.~$(jk)_1=jk$. Only a small subset ($\\lesssim N_A$) of possible pairs $j''k''$ are related to a $(jk)_l$ due to the energy denominators in perturbation theory. 
This is a direct result of the automatic Schmidt decomposition.\nFor $\\epsilon = 0$ one has $\\lambda_1(n;0) =1$, and the rest of the Schmidt eigenvalues vanish\nby the normalization \\eqref{eq:lambda-l-normalization}, as the initial state\nis a product state of eigenstates of the two subsystems.\n\nTo prepare for ensemble averaging, it is helpful to: i) assume that the $jk$ pairs are ordered according to the eigenvalues $\\varphi_{jk}$, ii) use the properties of $S$ so that the $n=0$ results are immediately evident, and iii) separate out the diagonal matrix element $S_{jk,jk}$ as a special case. Let $\\Delta \\varphi_{jk,j^\\prime k^\\prime} = \\varphi_{jk} - \\varphi_{j^\\prime k^\\prime}$. For the largest eigenvalue, i.e.~Eq.~(\\ref{eq:SchmidtNumbersPT}) for $l = 1$, one finds\n\\begin{align}\n& \\lambda_1(n;\\epsilon) = 1 - 2 \\sum_{j'k', j''k''} \\left|S_{j'k',jk}\\right|^2 \\left|S_{j''k'',jk}\\right|^2 \\nonumber \\\\\n& \\qquad \\qquad \\qquad \\qquad \\qquad \\quad \\times \\sin^2 \\left(\\frac{n \\Delta \\varphi_{j'k',j''k''}}{2} \\right) \\nonumber \\\\\n& = 1 - 4 \\left|S_{jk,jk}\\right|^2\\sum_{j'k'} \\left|S_{j'k',jk}\\right|^2 \\sin^2 \\left(\\frac{n \\Delta \\varphi_{jk,j'k'}}{2} \\right) \\nonumber \\\\\n& - 4 \\sum_{\\substack{j'k' \\le j''k'' \\\\ \\ne jk}} \\left|S_{j'k',jk}\\right|^2 \\left|S_{j''k'',jk}\\right|^2 \\sin^2 \\left(\\frac{n \\Delta \\varphi_{j'k',j''k''}}{2} \\right). 
\\nonumber \\\\\n\\label{eq:LargestEigenvalueRaw}\n\\end{align}\nFor $l \\ge 2$, and thus $(jk)_l \\neq jk$, a similar manipulation gives\n\\begin{align}\n&\\lambda_l(n;\\epsilon) = - \\sum_{\\substack{j'k'\\\\j''k''}} \\Re\\left\\{S^\\dagger_{jk,j'k'} S_{j'k',(jk)_l} S^\\dagger_{(jk)_l,j''k''} S_{j''k'',jk} \\right\\} \\nonumber \\\\\n& \\qquad \\qquad \\qquad \\qquad \\times 2 \\sin^2 \\left(\\frac{n \\Delta \\varphi_{j'k',j''k''}}{2} \\right) \\nonumber \\\\\n& \\qquad \\qquad - \\sum_{\\substack{j'k'\\\\j''k''}} \\Im\\Big\\{S^\\dagger_{jk,j'k'} S_{j'k',(jk)_l} S^\\dagger_{(jk)_l,j''k''} S_{j''k'',jk} \\Big\\} \\nonumber \\\\\n& \\qquad \\qquad \\qquad \\qquad \\times \\sin \\left(n \\Delta \\varphi_{j'k',j''k''} \\right).\n\\label{eq:OtherEigenvaluesRaw}\n\\end{align}\nNote that summing Eq.~\\eqref{eq:OtherEigenvaluesRaw}\nover $l>1$ reproduces unity minus the expression of Eq.~\\eqref{eq:LargestEigenvalueRaw} as it must. These Schmidt eigenvalue expressions are exact to order $\\mathcal{O}(N_A^{-1})$.\n\nLowest order Rayleigh-Schr\\\"{o}dinger perturbation theory is applied to the matrix elements of $S$ in order to obtain the complete $O(\\epsilon^2)$ terms of the corresponding expressions for the Schmidt eigenvalues. Let $\\Delta \\theta_{jk,j'k'} = \\theta_{jk}-\\theta_{j'k'}$. The matrix elements $S_{jk,j'k'}$ are approximately\n\\begin{equation}\nS_{jk,j'k'} \\approx\n\\begin{cases}\n\\frac{1}{\\sqrt{\\mathcal{N}_{jk}}}, & jk = j'k' \\\\\n\\frac{1}{\\sqrt{\\mathcal{N}_{jk}}} \\frac{\\epsilon \\, V_{j'k',jk}}{\\Delta \\theta_{jk,j'k'}}, & jk \\neq j'k',\n\\end{cases} \\label{eq:UnitaryTransformPT}\n\\end{equation}\nwhere $\\mathcal{N}_{jk}$ is the normalization factor and the perturbed quasienergy is\n\\begin{align}\n& \\varphi_{jk} = \\theta_{jk} + \\epsilon^2 \\sum_{j'k' \\neq jk } \\frac{|V_{j'k',jk}|^2}{\\Delta \\theta_{jk,j'k'}}. 
\\label{eq:Unreg_quasienergy}\n\\end{align}\nNote first that in this derivation the diagonal matrix elements $V_{j'k',j'k'}$ are set to zero, because the energy shift due to the first order correction is a random number added to an uncorrelated spectrum giving another uncorrelated spectrum, and hence will not change the spectral statistics nor rotate the eigenvectors. Secondly, the normalization factor is included even though its first correction is $O(\\epsilon^2)$ because it plays a significant role in determining the regularized expressions ahead, likewise for the perturbed eigenvalues multiplied by the time in the argument of the sine function. For the largest eigenvalue Eq.~(\\ref{eq:LargestEigenvalueRaw}) becomes\n\\begin{align}\n&\\lambda_1(n;\\epsilon) \\approx 1 - \\frac{4}{\\mathcal{N}_{jk}} \\sum_{j'k' \\neq jk} \\bigg(\\frac{\\epsilon^2 \\, |V_{jk,j'k'}|^2}{\\mathcal{N}_{j'k'} \\, \\Delta \\theta_{j'k',jk}^2} \\bigg) \\nonumber \\\\\n& \\qquad \\qquad \\qquad \\qquad \\times \\sin^2 \\left(\\frac{n \\Delta \\varphi_{j'k',jk}}{2}\\right),\n\\label{eq:LargestEigenvaluePT}\n\\end{align}\nand for $l \\ge 2$ the others, Eq.~\\eqref{eq:OtherEigenvaluesRaw}, read\n\\begin{align}\n& \\lambda_l(n;\\epsilon) \\approx \\frac{4}{\\mathcal{N}_{jk}}\n \\frac{\\epsilon^2\\,|V_{(jk)_l,jk}|^2}{\\mathcal{N}_{(jk)_l}\\,\\Delta \\theta_{jk,(jk)_l}^2} \\sin^2 \\left(\\frac{n \\Delta \\varphi_{(jk)_l,jk}}{2}\\right),\n\\label{eq:OtherEigenvaluesPT}\n\\end{align}\nwhere $\\mathcal{N}_{jk}=\\mathcal{N}_{(jk)_l}=1 + \\mathcal{O}(\\epsilon^2)$.\n\n\\subsection{Ensemble averaging}\nBefore moving on to ensemble averaging, it is helpful to make some rescalings as follows:\n\\begin{align}\n& \\Delta \\theta_{j'k',jk} = D \\, s_{j'k'}\\\\\n& \\Delta \\varphi_{j'k',jk} = D \\, s_{j'k'}(\\epsilon) \\approx D s_{j'k'} \\bigg( 1 + \\frac{2 \\Lambda w_{j'k'}}{s^2_{j'k'}} \\bigg) \\label{eq:UnregSpacing}\\\\\n& \\epsilon^2 |V_{jk,j'k'}|^2 = \\Lambda D^2 \\, w_{j'k'}\\ 
,\n\\end{align}\nwhere $D = 2\\pi\/(N_A N_B)$ is the mean level spacing of the full system, $s_{j'k'}=s_{j'k'}(0)$, and the $jk$ subscript is dropped where unnecessary. The approximation in Eq.~\\eqref{eq:UnregSpacing} follows by considering only the matrix element that directly connects the two levels. The other terms in Eq.~\\eqref{eq:UnregSpacing} move the levels back and forth and mostly cancel, whereas this term pushes the two levels away from each other and is dominant when the two levels are close-lying, which is where the correction matters.\nThe symmetry-breaking (entangling) interaction matrix elements of $V$ represented in the eigenbasis of the unperturbed system behave as complex Gaussian random variables such that\n$ \\overline{ | V_{jk,j'k'} |^2 } = v^2 w_{j'k'} $, where the $w_{j'k'}$ follow a Porter-Thomas distribution \\cite{PorTho1956} for the COE and an exponential one for the CUE:\n\\begin{equation}\n\\rho(w) =\n\\begin{cases}\n\\frac{1}{\\sqrt{2\\pi w}}\\exp(-w\/2) & \\qquad \\text{for COE} \\\\\n\\exp(-w) & \\qquad \\text{for CUE}.\n\\end{cases}\\label{PorterThomas}\n\\end{equation}\nIn both cases, $\\overline{w_{j'k'} }= 1$, which is consistent with $\\Lambda = \\epsilon^2 v^2 \/D^2$. In real dynamical systems, deviations from Porter-Thomas distributions may occur, as noted in Ref.~\\cite{Tomsovic18c}.\n\nThus, in the rescaled variables the Schmidt eigenvalues for $l \\ge 2$ are\n\\begin{equation}\n\\lambda_l(n;\\Lambda) \\approx \\frac{4}{\\mathcal{N}_{jk}}\\Bigg(\\frac{\\Lambda \\,w_{(jk)_l} }{\\mathcal{N}_{(jk)_l}\\, s_{(jk)_l}^2} \\Bigg) \\,\\sin^2\\bigg(\\frac{n Ds_{(jk)_l}(\\epsilon)}{2}\\bigg), \\label{eq:OtherEigenvaluesRescaled}\n\\end{equation}\nand the relation, following from the normalization\ncondition Eq.~\\eqref{eq:lambda-l-normalization},\n\\begin{equation}\n\\lambda_1(n;\\Lambda) = 1 - \\sum_{l\\ne 1}\\lambda_l(n;\\Lambda)\n\\end{equation}\nis exactly preserved to this order. 
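The two weight distributions in the equation above are easy to realize numerically: for the COE, $w$ is the square of a real standard Gaussian (a chi-squared variable with one degree of freedom), while for the CUE it is exponentially distributed. A minimal sampling sketch (illustrative only; the sample size is arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000  # arbitrary sample size

# COE: Porter-Thomas density rho(w) = exp(-w/2)/sqrt(2*pi*w),
# realized as w = x^2 with x a real standard Gaussian.
w_coe = rng.normal(size=n) ** 2

# CUE: exponential density rho(w) = exp(-w).
w_cue = rng.exponential(size=n)

# Both conventions are normalized so that mean(w) = 1, consistent
# with the identification Lambda = eps^2 v^2 / D^2.
print(w_coe.mean(), w_cue.mean())  # both close to 1
```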
Next convert the expressions for the Schmidt eigenvalues into integrals, by making use of the function $R(s,w)$~\\cite{Tomsovic18c},\n\\begin{equation}\nR(s,w) = \\sum_{j'k' \\neq jk} \\delta(w-w_{j'k'}) \\delta(s-s_{j'k'}), \\label{eq:R-function}\n\\end{equation}\nwhich after ensemble averaging becomes the joint probability density of finding a level at a rescaled distance $s$ from $\\theta_{jk}$ and the corresponding scaled matrix element $w_{j'k'}$ at the value $w$.\nWith these definitions, scalings, and substitutions, Eq.~(\\ref{eq:LargestEigenvaluePT}) becomes\n\\begin{align}\n& \\lambda_1(n;\\epsilon) \\approx 1 - 4 \\Lambda \\int_{-\\infty}^\\infty \\dd{s} \\int_0^\\infty \\dd{w} \\frac{w}{s^2}\\,R(s,w) \\nonumber \\\\\n&\\qquad \\qquad \\qquad \\times \\sin^2\\Big(\\frac{n D s}{2}\\big[\\,1+\\frac{2 \\Lambda w}{s^2}\\,\\big]\\Big).\n\\end{align}\nThe ensemble average of $\\lambda_1(n;\\epsilon)$ follows by substituting the ensemble average of $R(s,w)$ by\n\\begin{equation}\n\\overline{R(s,w)} = R_2(s) \\, \\rho(w),\n\\end{equation}\nwhere $R_2(s)$ is the two-point correlation function and $\\rho(w)$ is defined in Eq.~(\\ref{PorterThomas}). For an uncorrelated spectrum $R_2(s)=1$ for $-\\infty < s < \\infty$.\nTherefore, the averaged largest Schmidt eigenvalue is\n\\begin{align}\n& \\overline{ \\lambda_1(n;\\Lambda)}\\approx 1 - 4 \\Lambda \\int_{-\\infty}^\\infty \\dd{s} \\int_0^\\infty \\dd{w} \\frac{w}{s^2}\\, \\rho(w) \\nonumber \\\\\n&\\qquad \\qquad \\qquad \\times \\sin^2\\Big(\\frac{n D s}{2}\\big[\\,1+\\frac{2 \\Lambda w}{s^2}\\,\\big]\\Big). \\label{eq:LargestEigenvalueUnreg}\n\\end{align}\nThis expression diverges due to the fact that too many spacings are vanishingly small across the ensemble, and the perturbation theory must account for spacings smaller than the matrix elements. 
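The small-$s$ divergence can be made concrete: after averaging over $w$ (with $\overline{w}=1$) and over the oscillating $\sin^2$ factor (mean $1/2$), the integrand behaves as $1/(2s^2)$, so the integral over $|s|>s_{\min}$ grows like $1/s_{\min}$ as the cutoff is removed. A small numerical sketch of this growth (cutoff values chosen arbitrarily):

```python
import numpy as np

def cutoff_integral(s_min, s_max=50.0, npts=400_000):
    """Integrate the w- and phase-averaged integrand 1/(2 s^2) over
    s_min < |s| < s_max with a trapezoidal rule on a logarithmic grid."""
    s = np.logspace(np.log10(s_min), np.log10(s_max), npts)
    f = 0.5 / s**2
    # factor 2 accounts for both signs of s
    return 2 * np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(s))

i1 = cutoff_integral(1e-2)
i2 = cutoff_integral(1e-3)
print(i1, i2)  # grows like 1/s_min as the cutoff shrinks
```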
In the next subsection the expressions are regularized properly for small $s$.\n\nIt is worth noting that if the interest is in the ensemble average of some function of $\\lambda_1(n;\\Lambda)$, then one must consider the ensemble average of the same function of $R(s,w)$. Perhaps the simplest example is the ensemble average of the square of $\\lambda_1(n;\\Lambda)$, for which the needed result is~\\cite{Tomsovic18c}\n\\begin{eqnarray}\n\\overline{R(s_1,w_1)R(s_2,w_2)} &=& R_3(s_1,s_2) \\rho(w_1)\\rho(w_2) + \\nonumber \\\\\n&&\\hspace*{-2cm}\n \\delta\\left(w_1-w_2\\right)\\rho(w_1) \\delta\\left(s_1-s_2\\right)R_2(s_1)\n\\label{threepoint}\n\\end{eqnarray}\nwhich involves both the $2$-point and $3$-point spectral correlation functions. However, it turns out that the leading correction depends only on $R_2(s)$, as the $R_3(s_1,s_2)$ term gives a contribution that is $\\mathcal{O}(\\sqrt{\\Lambda})$ smaller in comparison; for example, the leading correction of higher-order moments depends only on the $2$-point spectral correlation function. This circumstance is helpful in the next section.\n\nFollowing the same sequence of steps for the second largest eigenvalue $\\lambda_2$ requires, in addition, the probability density of the closest scaled energy of one of the $\\ket{(jk)_l}$. For uncorrelated spectra it is given by $\\rho_{\\text{CN}}(s) = 2 \\exp(-2 s)$ for $0 \\le s < \\infty$~\\cite{Tomsovic18c,Srivastava19}. One finds for the ensemble average of the second largest eigenvalue,\n\\begin{align}\n& \\overline{\\lambda_2(n;\\Lambda)} \\approx 4 \\Lambda \\int_{-\\infty}^\\infty \\dd{s} \\int_0^\\infty \\dd{w} \\frac{w}{s^2} \\,\\rho(w) \\, \\rho_{\\text{CN}}(s)\\nonumber \\\\\n&\\qquad \\qquad \\qquad \\times \\sin^2\\Big(\\frac{n D s}{2}\\big[\\,1+\\frac{2 \\Lambda w}{s^2}\\,\\big]\\Big),\\label{eq:OtherEigenvalueUnreg}\n\\end{align}\nwhich is also divergent for small $s$. 
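The closest-neighbour density $\rho_{\text{CN}}(s) = 2 \exp(-2 s)$ quoted above follows because, for an uncorrelated spectrum with unit mean spacing, the gaps on either side of a level are independent exponentials, and the closest neighbour sits at their minimum. A quick sampling check (sketch; sample size arbitrary):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200_000

# Uncorrelated (Poissonian) spectrum, unit mean spacing: the gaps to the
# left and right neighbours are independent Exp(1) variables.
left = rng.exponential(size=n)
right = rng.exponential(size=n)
s_cn = np.minimum(left, right)  # distance to the closest neighbour

# min of two Exp(1) is Exp(2), i.e. rho_CN(s) = 2 exp(-2 s), with mean 1/2.
print(s_cn.mean())  # close to 0.5
```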
It turns out that the apparent order of corrections, $\\mathcal{O}(\\Lambda)$, seen in Eqs.~(\\ref{eq:LargestEigenvalueUnreg}, \\ref{eq:OtherEigenvalueUnreg}), is not correct. The regularization required to deal with the small energy denominators in the perturbation expressions alters the leading order to $\\mathcal{O}(\\sqrt{\\Lambda})$.\n\n\\subsection{Regularized perturbation theory}\n\nThe method for regularizing the perturbation expressions was introduced in Refs.~\\cite{Tomsovicthesis,French88a} and developed for the Schmidt eigenvalues pertaining to the eigenstates of interacting quantum chaotic systems in Refs.~\\cite{Lakshminarayan16,Tomsovic18c}. The standard Rayleigh-Schr\\\"{o}dinger perturbation expressions break down when the unperturbed spectrum has nearly degenerate levels due to the small energy denominators. However, there is an infinite sub-series of terms within the perturbation series of a quantity of interest which involve only two levels that are diverging due to near-degeneracy. This subseries can be resummed to get the corresponding regularized expressions. These results are equivalent to the two-dimensional degenerate perturbation theory results.\n\nThe regularized expressions for the Schmidt eigenvalues upon resummation of the two-level like terms of the perturbation series boil down to essentially replacing\n\\begin{align}\n\\frac{1}{\\mathcal{N}_{jk}}\\Bigg(\\frac{\\Lambda\\,w_{j'k'} }{\\mathcal{N}_{j'k'}\\, s_{j'k'}^2} \\Bigg) \\mapsto \\frac{\\Lambda \\, w_{j'k'}}{s_{j'k'}^2+ 4 \\, \\Lambda \\, w_{j'k'}} , \\label{eq:RegularizedNormTimesMatrixElement}\n\\end{align}\nalong with the energy spacing~\\cite{Srivastava16} in Eq.~(\\ref{eq:UnregSpacing}) as\n\\begin{equation}\ns_{j'k'}(\\epsilon) \\mapsto \\sqrt{s_{j'k'}^2 + 4 \\Lambda w_{j'k'} } \\label{eq:RegularizedSpacing}\n\\end{equation}\nin Eqs.~(\\ref{eq:LargestEigenvaluePT}, \\ref{eq:OtherEigenvaluesPT}). 
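The effect of the replacement in Eq.~(\ref{eq:RegularizedNormTimesMatrixElement}) can be seen directly: the unregularized two-level weight $4\Lambda w/s^2$ exceeds unity (impossible for a Schmidt eigenvalue) once $s^2 < 4\Lambda w$, while the regularized form $4\Lambda w/(s^2 + 4\Lambda w)$ stays in $[0,1]$ and agrees with it for $s^2 \gg 4\Lambda w$. A numerical sketch with illustrative values of $\Lambda$ and $w$:

```python
import numpy as np

lam, w = 1e-3, 1.0  # illustrative values of Lambda and the matrix element w

def weight_unreg(s):
    # perturbative two-level weight 4*Lambda*w / s^2: diverges as s -> 0
    return 4 * lam * w / s**2

def weight_reg(s):
    # regularized weight 4*Lambda*w / (s^2 + 4*Lambda*w): bounded in [0, 1]
    return 4 * lam * w / (s**2 + 4 * lam * w)

s = np.logspace(-5, 1, 300)
assert np.all((weight_reg(s) >= 0) & (weight_reg(s) <= 1))
assert weight_unreg(1e-3) > 1  # naive perturbation theory breaks down here
# for s^2 >> 4*Lambda*w the two expressions agree:
assert np.isclose(weight_unreg(1.0), weight_reg(1.0), rtol=5e-3)
```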
To verify the result in Eq.~(\\ref{eq:RegularizedNormTimesMatrixElement}), the Schmidt eigenvalues in Eqs.~(\\ref{eq:LargestEigenvalueRaw}, \\ref{eq:OtherEigenvaluesRaw}) were expanded using perturbation theory of the matrix elements $S_{jk,j'k'}$ up to and including order $\\mathcal{O}(\\epsilon^4)$. The details for this are given in App.~\\ref{app:RegularizationDerivation}. For a two-level system the normalization is\n\\begin{equation}\n|S_{jk,jk}|^2 = \\frac{1}{\\mathcal{N}_{jk}} = \\frac{1}{2}\\Bigg( 1 + \\frac{|s_{j'k'}|}{\\sqrt{s_{j'k'}^2+4 \\Lambda w_{j'k'}}} \\Bigg)\n\\end{equation}\nand the matrix element\n\\begin{equation}\n|S_{j'k',jk}|^2 = \\frac{1}{2}\\Bigg( 1 - \\frac{|s_{j'k'}|}{\\sqrt{s_{j'k'}^2+4 \\Lambda w_{j'k'}}} \\Bigg).\n\\end{equation}\nUsing these results, we get Eq.~(\\ref{eq:RegularizedNormTimesMatrixElement}). This gives the regularized Schmidt eigenvalues for $l \\neq 1$ as\n\\begin{align}\n& \\lambda_l(n;\\Lambda) = \\frac{4 \\, \\Lambda \\, w_{(jk)_l}}{s^2_{(jk)_l} + 4 \\, \\Lambda \\, w_{(jk)_l} } \\nonumber \\\\\n& \\qquad \\qquad \\qquad \\times \\sin^2 \\Big( \\frac{n D}{2} \\sqrt{s^2_{(jk)_l} + 4 \\, \\Lambda \\, w_{(jk)_l}} \\,\\Big). 
\\label{eq:SchmidtEigenvaluelneq1Reg}\n\\end{align}\nRescaling the spacing $z = s\/\\sqrt{\\Lambda}$ and time\n\\begin{equation} \\label{eq:time-rescaling}\n t = n D\\sqrt{\\Lambda} ,\n\\end{equation}\nthe ensemble average of the first two Schmidt eigenvalues is given by\n\\begin{align}\n& \\overline{ \\lambda_1(t;\\Lambda)} = 1- \\sqrt{\\Lambda}\\int_0^\\infty \\dd{w} \\int_{-\\infty}^\\infty \\dd{z} \\frac{4 w}{z^2 + 4 w} \\, \\rho(w) \\nonumber \\\\\n& \\qquad \\qquad \\qquad \\times \\sin^2\\Big(\\frac{t}{2} \\sqrt{z^2 + 4 w} \\Big) \\nonumber \\\\\n& \\qquad \\quad \\, = 1 - C_2(1;t)\\,\\sqrt{\\Lambda},\n\\end{align}\nwhere $C_2(1;t)$ is a short-hand for the integral\n(the general notation for arbitrary moments is given in\nEq.~(\\ref{eq:C2Integral}) ahead) and,\n\\begin{align}\n& \\overline{ \\lambda_2(t;\\Lambda)} = \\sqrt{\\Lambda}\\int_0^\\infty \\dd{w} \\int_0^\\infty \\dd{z} \\frac{4 w}{z^2 + 4 w} \\nonumber \\\\\n& \\qquad \\qquad \\qquad \\times \\, \\rho(w) \\,\\big(2 \\,\\text{e}^{-2 z \\sqrt{\\Lambda}}\\,\\big) \\sin^2\\Big(\\frac{t}{2} \\sqrt{z^2 + 4 w} \\Big) \\nonumber \\\\\n& \\, \\quad \\qquad = C_{2}(1;t) \\sqrt{\\Lambda} + \\mathcal{O}(\\Lambda \\ln \\Lambda).\n\\end{align}\nFor sufficiently small $\\Lambda$ it turns out that for both the COE and CUE cases, only (unperturbed) eigenstates corresponding to the first two largest Schmidt eigenvalues contribute largely to the state $\\ket{jk(t;\\Lambda)}$, i.e.,\n\\begin{equation}\n\\overline{ \\lambda_1(t;\\Lambda) + \\lambda_2(t;\\Lambda)} = 1 + \\mathcal{O}(\\Lambda \\ln \\Lambda ),\n\\end{equation}\nand other Schmidt eigenvalues ($l>2$) contribute in higher orders than $\\sqrt{\\Lambda}$.\nThis is crucial for extending the perturbation theory of the Schmidt eigenvalue moments to the non-perturbative regime, which is done in Sec.~\\ref{sec:NonPerturbative}.\nIt should be noted that as the unperturbed spectrum is uncorrelated, there is a non-zero probability of three-level, four-level and 
so forth near-degeneracy occurrences, but with lower probability than the two-level case, and hence their contributions are of higher order than $\\sqrt{\\Lambda}$.\n\nMoreover, note that the perturbation expressions for the Schmidt eigenvalues of the eigenstates $\\{\\ket{\\Phi_{jk}}\\}$ given in Refs.~\\cite{Lakshminarayan16, Tomsovic18c} for the largest eigenvalue and the other eigenvalues are $1 - \\sum_{j'k' \\neq jk} \\epsilon^2 |V_{j'k',jk}|^2\/\\Delta\\theta^2_{jk,j'k'}$ and $\\epsilon^2 |V_{(jk)_l,jk}|^2\/\\Delta\\theta^2_{jk,(jk)_l}$, respectively, in contrast to the expressions for the Schmidt eigenvalues of a time evolving state $\\ket{jk(n;\\epsilon)}$ presented in Eqs.~(\\ref{eq:LargestEigenvaluePT}, \\ref{eq:OtherEigenvaluesPT}). Due to an extra normalization factor in the denominators of Eqs.~(\\ref{eq:LargestEigenvaluePT}, \\ref{eq:OtherEigenvaluesPT}), the expression for the regularization, although related, takes on a different form than that in Refs.~\\cite{Lakshminarayan16, Tomsovic18c}.\n\n\n\\section{Eigenvalue moments of the reduced density matrix}\n\\label{sec:eigenvalue-moments}\n\nTo fully characterize the entanglement of the evolving state, the Schmidt eigenvalue expression in Eq.~(\\ref{eq:SchmidtEigenvaluelneq1Reg}) is used to compute the leading order of the general moments analytically, and thereby the HCT entropies, good up to and including $\\mathcal{O}(\\sqrt{\\Lambda})$.\n\n\\subsection{General moments} \\label{subsec:GeneralMomentsCalc}\n\nConsider the ensemble average of the general moments $\\mu_\\alpha$, Eq.~\\eqref{eq:GenericMoments}, of the Schmidt eigenvalues. The largest eigenvalue must be separated out from the others and two integrals considered. 
First, consider general moments of the sum of all the Schmidt eigenvalues other than the largest, i.e.\n\\begin{equation}\n\\overline{ \\sum_{l \\neq 1} \\lambda_l^\\alpha(t;\\Lambda) } = C_2(\\alpha;t) \\, \\sqrt{\\Lambda} \\,,\n\\end{equation}\nwhere after rescaling $s$ to $z$ in Eq.~(\\ref{eq:R-function})\n\\begin{align}\n& C_2(\\alpha;t) = \\int_{-\\infty}^\\infty \\dd{z} \\int_0^\\infty \\dd{w} \\overline{R(z,w)} \\frac{4^\\alpha w^\\alpha}{(z^2+4w)^\\alpha} \\nonumber \\\\\n& \\qquad \\qquad \\quad \\times \\sin^{2\\alpha}\\bigg( \\frac{t}{2} \\sqrt{z^2+4w}\\,\\bigg) \\nonumber \\\\\n& \\qquad \\qquad = \\int_{-\\infty}^\\infty \\dd{z} \\int_0^\\infty \\dd{w} \\rho(w) \\frac{4^\\alpha w^\\alpha}{(z^2+4w)^\\alpha} \\nonumber \\\\\n& \\qquad \\qquad \\quad \\times \\sin^{2\\alpha}\\bigg( \\frac{t}{2} \\sqrt{z^2+4w}\\,\\bigg). \\label{eq:C2Integral}\n\\end{align}\nThe evaluation of this integral is discussed in the next subsection and\nApp.~\\ref{app:C-2-alpha-t-derivation}.\nNow focusing on the ensemble average of the largest Schmidt eigenvalue,\n\\begin{align}\n& \\overline{\\lambda_1^\\alpha(t;\\Lambda)} = \\overline{\\bigg( 1 - \\sum_{l \\neq 1} \\lambda_l(t;\\Lambda)\\bigg)^\\alpha} \\nonumber \\\\\n& \\qquad\\,\\,\\quad= \\overline{\\Bigg[ 1 - \\int_{-\\infty}^\\infty \\dd{z} \\int_0^\\infty \\dd{w} R(z,w) \\frac{4 w}{z^2+4w}} \\nonumber \\\\\n& \\qquad \\qquad \\quad \\overline{\\times \\sin^{2\\alpha}\\bigg( \\frac{t}{2} \\sqrt{z^2+4w}\\,\\bigg) \\Bigg]^\\alpha}\\,. \\label{eq:LargestEigenvalueExactIntegral}\n\\end{align}\nBefore computing the above for a general power $\\alpha$, consider the $\\alpha =2$ case. Expanding the above expression gives a square of the integral term, which is a quadruple integral containing the product $R(z_1,w_1) R(z_2,w_2)$. Equation (\\ref{threepoint}) has two contributions, the diagonal term, where $(z_1,w_1) = (z_2,w_2)$, and the off-diagonal one. 
For an uncorrelated spectrum, any multi-point spectral correlation function is unity. Thus\n\\begin{align}\n\\overline{\\lambda_1^2(t;\\Lambda)} = &\\; 1 - 2 C_2(1;t)\\sqrt{\\Lambda} + C_2(2;t) \\sqrt{\\Lambda} \\nonumber \\\\\n& + C^2_2(1;t)\\, \\Lambda,\n\\end{align}\nwhere the off-diagonal term, $R_3(z_1,z_2)$, is responsible for the $\\mathcal{O}(\\Lambda)$ term.\nThis illustrates that to the leading $\\mathcal{O}(\\sqrt{\\Lambda})$, the diagonal term alone suffices, and the other terms contribute to higher than leading order. This simplifies the $\\overline{\\lambda_1^\\alpha}$ computation, where after the binomial expansion of Eq.~(\\ref{eq:LargestEigenvalueExactIntegral}), keeping only the terms contributing to the leading order gives,\n\\begin{align}\n& \\overline{ \\lambda_1^\\alpha(t;\\Lambda) } = 1 + \\sum_{p=1}^\\infty (-1)^p \\binom{\\alpha}{p} \\overline{ \\sum_{l\\neq 1} \\lambda_l^p }.\n\\end{align}\nFinally, the general moments Eq.~\\eqref{eq:GenericMoments}\nof the Schmidt eigenvalues for $\\alpha >1\/2$ are given by\n\\begin{equation}\n\\overline{\\mu_{\\alpha}(t;\\Lambda) } = 1 + \\sum_{p=1}^\\infty (-1)^p \\binom{\\alpha}{p} \\overline{ \\sum_{l\\neq 1} \\lambda_l^p }\\, +\\, \\overline{ \\sum_{l\\neq 1} \\lambda_l^\\alpha } .\n\\end{equation}\nThese can be written as\n\\begin{equation} \\label{eq:mu-alpha-t---C-alpha-t}\n\\overline{ \\mu_{\\alpha}(t;\\Lambda) } = 1 - C(\\alpha;t) \\sqrt{\\Lambda},\n\\end{equation}\nwhere\n\\begin{equation} \\label{eq:C-alpha-t-via-C-2}\nC(\\alpha;t) = \\sum_{p=1}^\\infty (-1)^{p+1} \\binom{\\alpha}{p} C_{2}(p;t) - C_{2}(\\alpha;t).\n\\end{equation}\nThese functions are central for the analytical\ndescription of the entropies and shown\nin Fig.~\\ref{fig:Calpha} for the COE and the CUE.\nA particular feature is the overshooting before the saturation sets in.\nFor the CUE the location of the maxima occurs slightly later in $t$\nthan for the COE and also the saturation regime is reached\nslightly 
later.\nMoreover, the saturation value is slightly larger than in the COE case.\n\n\\begin{figure}\n\\includegraphics[width=8.6cm]{fig_Calpha_COE.pdf}\n\n\\includegraphics[width=8.6cm]{fig_Calpha_CUE.pdf}\n\n\\caption{Plot of $C(\\alpha;t)$ for $\\alpha=2$ (solid),\n $\\alpha=3$ (dashed), $\\alpha=4$ (dot-dashed),\nand $\\pdv*{C(\\alpha;t)}{\\alpha}|_{\\alpha \\rightarrow 1}$ (dotted)\nfor (a) COE and (b) CUE.}\n\\label{fig:Calpha}\n\\end{figure}\n\n\nIn addition, it can be shown that $\\overline{ \\mu_\\alpha (t;\\Lambda) }$ is determined, up to and including $\\mathcal{O}(\\sqrt{\\Lambda})$, by the first and second largest Schmidt eigenvalues\n\\begin{equation}\n \\overline{ \\mu_\\alpha (t;\\Lambda) }\n = \\overline{\\lambda_1^\\alpha(t;\\Lambda)} +\n \\overline{\\lambda_2^\\alpha(t;\\Lambda)},\n\\end{equation}\nas the other Schmidt eigenvalues ($l > 2$) do not contribute at $ \\mathcal{O} ( \\sqrt{ \\Lambda } ) $.\nThis relation is vital for recursively invoking perturbation theory\nin Sec.~\\ref{sec:NonPerturbative}\nin order to extend the results beyond the perturbative regime.\n\n\\subsection{Entropies}\n\nThe HCT entropies can be computed in the perturbation regime using the results for the average eigenvalue moments. 
For $\\alpha \\neq 1$ one has\n\\begin{equation}\n\\overline{ S_\\alpha(t;\\Lambda)} = \\frac{C(\\alpha;t)}{\\alpha -1} \\sqrt{\\Lambda},\n\\end{equation}\nwhere $C(\\alpha; t)$ is given by Eq.~\\eqref{eq:C-alpha-t-via-C-2}.\nThis requires the computation of $C_2(\\alpha; t)$,\nwhich is done in App.~\\ref{app:C-2-alpha-t-derivation},\nand leads to\n\\begin{equation} \\label{eq:C-2-alpha-t}\n C_{2}(\\alpha;t) = 2^\\alpha \\sum_{q=0}^\\infty \\sum_{m=0}^q (-1)^q\n \\binom{\\alpha}{q} a_{qm} f_{m}(\\alpha;t),\n\\end{equation}\nwhere\n\\begin{equation}\na_{qm} = \\binom{q}{\\frac{q-m}{2}}\n \\left[ \\frac{1+(-1)^{q-m}}{2^q(1+\\delta_{m,0})} \\right]\n\\end{equation}\nand\n\\begin{align}\n f_m(\\alpha;t) = & \\int_{-\\infty}^\\infty \\dd{z} \\int_0^\\infty\n \\dd{w} \\frac{w^\\alpha \\rho(w)}{(z^2+4w)^\\alpha} \\nonumber \\\\\n & \\qquad \\times \\cos(m t \\sqrt{z^2+4w}\\,).\n\\end{align}\nExplicit expressions for $f_m(\\alpha;t)$\nfor the COE and CUE are derived in App.~\\ref{app:C-2-alpha-t-derivation-COE}\nand \\ref{app:C-2-alpha-t-derivation-CUE}, respectively.\n\n\n\n\\subsection{Discussion}\n\nTo discuss some qualitative features, the case $\\alpha = 2$, which corresponds\nto the linear entropy, is considered here.\nBy Eqs.~(\\ref{eq:GenericHCTEntropy}, \\ref{eq:mu-alpha-t---C-alpha-t})\none has $S_2(t;\\Lambda) = C(2; t) \\sqrt{\\Lambda}$.\nIn case of the COE\n\\begin{align}\n& C(2;t) = 4 \\pi t \\big( \\text{e}^{-t^2} [ \\{ 1 + 2 t^2 \\} I_0(t^2) + 2 t^2 I_1(t^2) ] \\nonumber \\\\\n& \\qquad \\qquad - 4 t^2 \\text{e}^{-4 t^2} [ I_0(4 t^2) + I_1(4 t^2) ] \\, \\big),\n\\end{align}\nwhere $I_n(z)$ is the modified Bessel function of the first kind\n\\cite[Eq.~10.25.2]{DLMF}.\nWhereas for the CUE case\n\\begin{align}\n& C(2;t) = \\pi t \\Big( 3 \\text{e}^{-t^2} - \\frac{1}{2}\\text{e}^{-4 t^2} \\Big) + \\pi^{3\/2} \\text{erf}(t) \\Big( \\frac{1}{2} + 3 t^2 \\Big) \\nonumber \\\\\n& \\qquad \\qquad + \\pi^{3\/2} \\text{erf}(2 t) \\Big( \\frac{1}{8} - 3 t^2 
\\Big),\n\\end{align}\nwhere $\\text{erf}(z)$ is the error function.\nFor both COE and CUE cases, $C(2;t)$ for small $t$ has the expansion\n\\begin{align} \\label{eq:C-2-t}\n & C(2;t) = 4 \\pi t + \\mathcal{O}(t^3)\n\\end{align}\nand naturally gives linear-in-time entropy growth for short time $t$.\nThe same is true for other $\\alpha$-entropies, except for $\\alpha = 1$,\nfor which the leading term is of the order $\\mathcal{O}(t\\,\\ln t)$. In fact, it can be shown that for both COE and CUE cases, with $\\alpha > 1$ and short time $t$,\n\\begin{equation}\n\\dv{}{t} C(\\alpha;t) \\big|_{t \\rightarrow 0} = 2 \\pi \\alpha. \\label{eq:InitialRateCalpha}\n\\end{equation}\n\nIn the limit $t \\rightarrow \\infty$, saturation values of the entropies\ncan be obtained from\n\\begin{equation}\n S_\\alpha(\\infty; \\Lambda) = \\frac{C(\\alpha; \\infty)}{\\alpha-1} \\sqrt{\\Lambda},\n\\end{equation}\nwhich are of the order $\\mathcal{O}(\\sqrt{\\Lambda}\\,)$.\nUsing the explicit expressions for $f_m(\\alpha;t)$\nderived in App.~\\ref{app:C-2-alpha-t-derivation-COE}, \\ref{app:C-2-alpha-t-derivation-CUE}\none sees that in the limit $t \\rightarrow \\infty$,\n$f_m(\\alpha;t)$ vanish for all $m \\neq 0$.\nUsing this fact, an expression for the saturation value\nof the $\\alpha$-entropies ($\\alpha \\neq 1$) can be derived as\n\\begin{align}\n\\overline{S_\\alpha(\\infty;\\Lambda)} & = \\Bigg(\\alpha \\,\\, _{3}F_2(1\/2,3\/2,1-\\alpha\\,;\\,2,2\\,;\\,1) \\nonumber \\\\\n&\\; \\qquad \\quad - \\frac{2}{\\pi} \\,\\,\\frac{\\Gamma(\\alpha-1\/2)\\,\\Gamma(\\alpha+1\/2)}{\\Gamma(\\alpha)\\,\\Gamma(\\alpha+1)} \\Bigg) \\nonumber \\\\\n& \\; \\qquad \\times \\frac{\\sqrt{\\Lambda}}{\\alpha-1}\\begin{cases}\n\t\t\t\t\t\t\t\\sqrt{2\\pi} \\; \\quad \\quad \\text{for COE,} \\\\\n\t\t\t\t\t\t\t\\pi^{3\/2}\/2 \\, \\; \\quad \\text{for CUE.}\n\t\t\t\t\t\t \\end{cases}\n \\label{eq:sat-S-alpha}\n\\end{align}\nHere $_{m}F_n$ is a generalized hypergeometric function\n\\cite[Eq.~35.8.1]{DLMF} defined 
by\n\\begin{align}\n& _{m}F_n ( a_1 ,\\ldots, a_m ; b_1 ,\\ldots, b_n ; z) = \\sum_{k=0}^\\infty \\frac{(a_1)_k \\ldots (a_m)_k}{(b_1)_k \\ldots (b_n)_k} \\frac{z^k}{k!},\n\\end{align}\nwhere $(a)_k = \\Gamma(a+k)\/\\Gamma(a)$ is Pochhammer's symbol.\nEquation~\\eqref{eq:sat-S-alpha} shows that the saturation values\nfor both COE and CUE scale with $\\sqrt{\\Lambda}$\nand that the CUE case leads to a slightly (11\\%) larger value.\n\n\n\nFor the linear entropy, $\\alpha=2$, Eq.~\\eqref{eq:sat-S-alpha} simplifies to\n\\begin{align} \\label{eq:S2-sat-COE-CUE}\n\\overline{S_2(\\infty;\\Lambda)} = \\sqrt{\\Lambda}\n \\begin{cases}\n 5 \\sqrt{\\pi\/8} & \\text{for COE}, \\\\\n 5 \\pi^{3\/2}\/8 & \\text{for CUE}.\n \\end{cases}\n\\end{align}\nFor the special case of the von Neumann entropy for $\\alpha=1$,\n$\\lim_{t\\rightarrow \\infty} \\pdv*{C(\\alpha;t)}{\\alpha}|_{\\alpha \\rightarrow 1}$\nneeds to be computed. It can be shown that\n\\begin{align} \\label{eq:saturation-perturbatively}\n\\overline{S_1(\\infty;\\Lambda)}\n= & \\Big(\n 4 \\ln 2- \\frac{3}{16}\\, _{4}F_3(1,1,3\/2,5\/2 \\,;\\,2,3,3\\,;\\,1) \\Big)\n\\nonumber \\\\\n& \\times \\sqrt{\\Lambda} \\begin{cases}\n\t\t\t\t\t\t\t\\sqrt{2\\pi} \\; \\quad \\quad \\text{for COE,} \\\\\n\t\t\t\t\t\t\t\\pi^{3\/2}\/2 \\, \\; \\quad \\text{for CUE.}\n\t\t\t\t\t\t \\end{cases}\n\\end{align}\nAn extension to the non-perturbative result will be discussed\nin Sec.~\\ref{sec:long-time-saturation}.\n\nIn the perturbative regime, if the entropies $\\overline{ S_\\alpha (t;\\Lambda)}$ are scaled with respect to\ntheir saturation values,\n\\begin{equation}\n\\overline{ \\mathcal{S}_\\alpha (t) }\n = \\frac{\\overline{S_\\alpha(t; \\Lambda)}}\n {\\overline{S_\\alpha(\\infty; \\Lambda)}},\n\\label{eq:Universal-Scaling}\n\\end{equation}\nthey do not depend on the transition parameter,\nleading to one universal curve for each $\\alpha$ described\nby the prediction\n\\begin{equation}\n\\overline{ \\mathcal{S}_\\alpha (t) } = 
\\frac{C(\\alpha;t)}{C(\\alpha;\\infty)}.\n \\label{eq:UniversalCurveTheory}\n\\end{equation}\nThis universal property is depicted for the linear entropy in Fig.~\\ref{fig:UniversalCurvePlot} for various $\\Lambda$-values. As $\\Lambda$ goes beyond the perturbation regime, departure from the universal curve is seen due to the breakdown of the perturbation theory. In the forthcoming section, the extension of the theory to the non-perturbative regime is discussed.\n\n\\begin{figure}\n \\centering\n \\includegraphics[width=8.6cm]{fig_CUE_universal.pdf}\n\n \\caption{\\label{fig:UniversalCurvePlot} Scaled linear entropy\n $\\overline{ \\mathcal{S}_2(t) }$, Eq.~\\eqref{eq:Universal-Scaling}, for the\n random matrix transition ensemble in Eq.~(\\ref{eq:GenericFloquetRMT}) for\n the CUE case for $\\Lambda = 10^{-6}$ (magenta circles),\n $\\Lambda = 10^{-4}$ (red triangles), $\\Lambda = 10^{-2}$ (blue squares),\n and $\\Lambda = 1$ (green diamonds); The theoretical prediction\n Eq.~(\\ref{eq:UniversalCurveTheory}) for $\\alpha =2$ is shown as solid curve.}\n\\end{figure}\n\n\n\\section{Non-perturbative regime} \\label{sec:NonPerturbative}\n\nThe results obtained from the perturbation theory can be extended to the\nnon-perturbative regime to produce the full transition and the saturation\nvalues by employing the recursively embedded perturbation theory technique as\ndone in Ref.~\\cite{Lakshminarayan16,Tomsovic18c} for the eigenstates.\n\n\n\\subsection{Full transition}\n\n\nFor small enough $\\Lambda$, the time evolved state $\\ket{jk(n;\\epsilon)} \\equiv \\ket{jk(t;\\Lambda)}$ can be Schmidt decomposed as\n\\begin{equation}\n\\ket{jk(t;\\Lambda)} = \\sqrt{\\lambda_1(t;\\Lambda)} \\,\\ket{(jk)_1} + \\sqrt{\\lambda_2(t;\\Lambda)} \\,\\ket{(jk)_2},\n\\end{equation}\nsuch that $\\lambda_1 + \\lambda_2 = 1$, where the time-dependent phase-factor is absorbed into the definition of the Schmidt eigenvectors $\\ket{ (jk)_l }$. 
Upon increasing the interaction strength, another unperturbed state energetically close to $\\ket{(jk)_1}$ will contribute to $\\ket{jk(n;\\epsilon)}$,\n\\begin{align}\n& \\ket{jk(t;\\Lambda)} = \\sqrt{\\lambda_1'(t;\\Lambda)} \\,\\bigg(\\sqrt{\\lambda_1(t;\\Lambda)} \\,\\ket{(jk)_1} + \\nonumber \\\\\n& \\qquad \\qquad \\quad \\sqrt{\\lambda_2(t;\\Lambda)} \\,\\ket{(jk)_2}\\bigg) + \\sqrt{\\lambda_2'(t;\\Lambda)} \\, \\ket{(jk)_3},\n\\end{align}\nwhere $\\lambda_{1,2}'$ have the same statistical properties as the unprimed ones. Thus the purity is $\\mu_2' = \\lambda_1^{'2} \\lambda_1^2 + \\lambda_1^{'2} \\lambda_2^2 + \\lambda_2^{'2}$, giving\n\\begin{align}\n\\mu_2' - \\mu_2 = -(1-\\lambda_1^{'2} - \\lambda_2^{'2})\\mu_2 + \\lambda_2^{'2} (1-\\mu_2).\n\\end{align}\n\n\n\\begin{figure}\n\\includegraphics[width=8.4cm]{fig_COE_TP_1e-06_N_50.pdf}\n\n\\includegraphics[width=8.4cm]{fig_COE_TP_0p0001_N_50}\n\n\\includegraphics[width=8.4cm]{fig_COE_TP_0p01_N_50.pdf}\n\n\\includegraphics[width=8.4cm]{fig_COE_TP_1_N_50}\n\\caption{\\label{fig:COE_Salpha} Entropies $\\overline{ S_\\alpha }$ for the COE\n  case with $N_A=N_B=50$ for (a) $\\Lambda=10^{-6}$, (b) $\\Lambda=10^{-4}$, (c)\n  $\\Lambda=10^{-2}$, and (d) $\\Lambda=1$ for $\\alpha=1$ (green diamonds),\n  $\\alpha=2$ (magenta circles), $\\alpha=3$ (red triangles), and $\\alpha=4$\n  (blue squares). 
Black lines show the corresponding theory curves,\n Eq.~(\\ref{eq:alphaEntropyTheory}).}\n\\end{figure}\n\n\\begin{figure}\n\\includegraphics[width=8.4cm]{fig_CUE_TP_1e-06_N_50.pdf}\n\n\\includegraphics[width=8.4cm]{fig_CUE_TP_0p0001_N_50}\n\n\\includegraphics[width=8.4cm]{fig_CUE_TP_0p01_N_50.pdf}\n\n\\includegraphics[width=8.4cm]{fig_CUE_TP_1_N_50}\n\\caption{\\label{fig:CUE_Salpha} Entropies $\\overline{ S_\\alpha }$ for the CUE\n case with $N_A=N_B=50$ for (a) $\\Lambda=10^{-6}$, (b) $\\Lambda=10^{-4}$, (c)\n $\\Lambda=10^{-2}$, and (d) $\\Lambda=1$ for $\\alpha=1$ (green diamonds),\n $\\alpha=2$ (magenta circles), $\\alpha=3$ (red triangles), and $\\alpha=4$\n (blue squares). Black lines show the corresponding theory curves,\n Eq.~(\\ref{eq:alphaEntropyTheory}).}\n\\end{figure}\n\nFor a given $\\alpha$, following this technique and replacing $\\lambda_{1,2}^{'\\alpha}$ with their ensemble average, a differential equation for the moments $\\overline{ \\mu_\\alpha (t;\\Lambda) }$ can be derived, good up to $\\mathcal{O}(\\sqrt{\\Lambda})$,\n\\begin{equation}\n\\pdv{\\overline{ \\mu_\\alpha(t;\\Lambda) }}{\\sqrt{\\Lambda}} = - C(\\alpha;t) \\overline{ \\mu_\\alpha (t;\\Lambda)}.\n\\end{equation}\nThis has a solution of the form (valid for the infinite dimensional case)\n\\begin{equation}\n\\overline{ \\mu_\\alpha(t;\\Lambda)} \\approx \\exp(-C(\\alpha;t) \\sqrt{\\Lambda}).\n\\end{equation}\nIn the limit $\\Lambda \\rightarrow \\infty$, and for large (but finite)\ndimensionality $N=N_A =N_B$, the moments tend to the random matrix result\n\\begin{equation}\n\\overline{ \\mu_\\alpha^\\infty } = \\mathcal{C}_\\alpha\/N^{\\alpha-1},\n\\end{equation}\nwhere the $\\mathcal{C}_\\alpha$ are Catalan numbers \\cite[\\S26.5.]{DLMF}.\nFor the $N_A \\neq N_B$ case such an expression can be found following Ref.~\\cite{Sommers04}. 
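The $\Lambda \to \infty$ limit quoted above can be checked by direct sampling: for Haar-random bipartite pure states the moments of the reduced density matrix approach $\mathcal{C}_\alpha/N^{\alpha-1}$, e.g.\ the purity tends to $\mathcal{C}_2/N = 2/N$. A small Monte-Carlo sketch (dimension and number of trials arbitrary):

```python
import numpy as np

rng = np.random.default_rng(2)
N = 50        # subsystem dimensions N_A = N_B = N
trials = 400

mu2 = np.empty(trials)
for i in range(trials):
    # Haar-random bipartite pure state as an N x N coefficient matrix
    C = rng.normal(size=(N, N)) + 1j * rng.normal(size=(N, N))
    C /= np.linalg.norm(C)
    lam = np.linalg.svd(C, compute_uv=False) ** 2  # Schmidt eigenvalues
    mu2[i] = np.sum(lam**2)                        # purity mu_2

# Catalan number C_2 = 2: the random-matrix limit is mu_2 -> 2/N
# (the exact finite-N average 2N/(N^2+1) is indistinguishable here).
print(mu2.mean())  # close to 2/N = 0.04
```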
Incorporating this limit\ninto the above differential equation solution gives an approximate expression for the moments valid for any $\\Lambda$,\n\\begin{equation}\n\\overline{ \\mu_\\alpha (t;\\Lambda) } \\approx \\exp(- \\frac{C(\\alpha;t)}{1-\\overline{\\mu_\\alpha^\\infty}}\\sqrt{\\Lambda})(1-\\overline{\\mu_\\alpha^\\infty}) + \\overline{ \\mu_\\alpha^\\infty}.\n\\end{equation}\nUsing the definition of the HCT entropies (\\ref{eq:GenericHCTEntropy})\ngives\n\\begin{equation}\n\\overline{S_\\alpha(t;\\Lambda) } \\approx \\bigg[ 1 - \\exp(-\\frac{C(\\alpha;t)}{(\\alpha-1)\\overline{ S_\\alpha^\\infty}}\\sqrt{\\Lambda})\\bigg] \\overline{S_\\alpha^\\infty}, \\label{eq:alphaEntropyTheory}\n\\end{equation}\nwhere\n\\begin{equation}\n\\overline{ S_\\alpha^\\infty} = \\frac{1-\\mathcal{C}_\\alpha N^{1-\\alpha}}{\\alpha-1}.\n\\end{equation}\nTo apply Eq.~\\eqref{eq:alphaEntropyTheory}\none has to use for $C(\\alpha; t)$ the results\ncorresponding to the CUE or the COE,\nas given by Eq.~\\eqref{eq:C-alpha-t-via-C-2}.\nWhen $\\Lambda$ is large, however,\nthere is no difference between CUE and COE\ndue to the same scaling of $C(\\alpha; t)$,\nas for example in Eq.~\\eqref{eq:C-2-t} for $\\alpha=2$.\n\n\nThe result Eq.~(\\ref{eq:alphaEntropyTheory}) is in agreement with\nnumerical computations for both the COE, see Fig.~\\ref{fig:COE_Salpha},\nand the CUE, see Fig.~\\ref{fig:CUE_Salpha}.\nFor these numerical calculations, 20 realizations of the random matrix model Eq.~\\eqref{eq:GenericFloquetRMT} for $N_A=N_B=50$ have been used, leading to a total of $5 \\times 10^4$ initially unentangled eigenstates $\\ket{jk}$ used for averaging. 
This amount of averaging is\nparticularly relevant for small values of $\\Lambda$\nfor which the time evolution of the entanglement of the individual\nstates shows strong fluctuations from one state to another.\nThese are also the origin of the small fluctuations\nseen in both figures for $\\Lambda=10^{-6}$\nfor the von Neumann entropy $\\overline{S_1(t; \\Lambda)}$,\nwhich is the most sensitive of the considered entropies.\nMoreover, at small $\\Lambda$, finite $N$ effects\nbecome visible, in particular for the COE,\ndue to the small overall amount of entanglement.\nIncreasing the matrix dimension of the subsystems\nto $N=100$ improves the agreement with the theoretical prediction\n(not shown).\nFor $\\Lambda=10^{-4}$ and $\\Lambda=10^{-2}$ excellent\nagreement of the numerically computed entropies and\nthe theory is found.\nFor $\\Lambda=1$ again the von Neumann entropy shows\nsmall deviations from the theoretical prediction.\n\n\\subsection{Long-time saturation}\n\\label{sec:long-time-saturation}\n\n\\begin{figure}[b]\n\\centering\n \\includegraphics[width=8.4cm]{fig_saturation_logscale.pdf}\n\\caption{\\label{fig:saturation}\nSaturation values of the linear entropy, $\\overline{S_2(\\infty;\\Lambda)}$, as a function of $\\Lambda$\nfor the COE (blue squares) and CUE (red diamonds) in comparison\nwith the prediction Eq.~\\eqref{eq:sat-alphaEntropyTheory}\n(dashed and solid black lines representing COE and CUE, respectively).}\n\\end{figure}\n\n\nUsing the result Eq.~\\eqref{eq:alphaEntropyTheory}\none can also perform the long-time limit to\nobtain a prediction for the saturation\nvalues $\\overline{S_\\alpha(\\infty;\\Lambda) }$\ngoing beyond the perturbative result Eq.~\\eqref{eq:saturation-perturbatively}.\nThus one gets\n\\begin{equation}\n \\overline{S_\\alpha(\\infty;\\Lambda) } = \\bigg[ 1 - \\exp(-\\frac{C(\\alpha;\\infty)}{(\\alpha-1)\\overline{ S_\\alpha^\\infty}}\\sqrt{\\Lambda})\\bigg] \\overline{S_\\alpha^\\infty}. 
\\label{eq:sat-alphaEntropyTheory}\n\\end{equation}\nFor large $\\Lambda$ the exponential becomes very small\nso that the saturation reaches $\\overline{S_\\alpha^\\infty}$.\nHowever, for small $\\Lambda$ a reduced saturation value is obtained.\nFigure~\\ref{fig:saturation} illustrates\nthis for the linear entropy for the COE and CUE\nwhere Eq.~\\eqref{eq:S2-sat-COE-CUE} is used for $C(\\alpha, \\infty)$.\nVery good agreement of the prediction with the numerical results\nis found.\nUp to $\\Lambda=10^{-1}$ the saturation value\nfollows Eq.~\\eqref{eq:S2-sat-COE-CUE}\nand then the behavior given by Eq.~\\eqref{eq:sat-alphaEntropyTheory}\nsets in.\nThe saturation values for the COE are below\nthose of the CUE but eventually both approach $\\overline{S_2^\\infty}$.\n\n\\section{Coupled kicked rotors}\n\\label{sec:coupled-kicked-rotors}\n\n\\begin{figure}\n\\includegraphics[width=8.4cm]{fig_KR_TP_1e-06_N_100.pdf}\n\n\\includegraphics[width=8.4cm]{fig_KR_TP_0p0001_N_100.pdf}\n\n\\includegraphics[width=8.4cm]{fig_KR_TP_0p01_N_100.pdf}\n\n\\includegraphics[width=8.4cm]{fig_KR_TP_1_N_100.pdf}\n\n\\caption{\\label{fig:KR_Salpha} Entropies $\\overline{ S_\\alpha }$ for the coupled kicked rotors with completely broken time-reversal invariance and $N=100$, for (a) $\\Lambda=10^{-6}$, (b) $\\Lambda=10^{-4}$, (c)\n $\\Lambda=10^{-2}$, and (d) $\\Lambda=1$ for $\\alpha=1$ (green diamonds),\n $\\alpha=2$ (magenta circles), $\\alpha=3$ (red triangles), and $\\alpha=4$\n (blue squares). Black lines show the corresponding theory curves,\n Eq.~(\\ref{eq:alphaEntropyTheory}).}\n\\end{figure}\n\nA bipartite system whose subsystems exhibit classical chaotic motion is considered here to compare against the universal entanglement dynamics results derived from random matrix theory. 
The knowledge of $\\Lambda$ and its relation to the system-dependent details is crucial for the comparison.\nIn the case of a system whose subsystems are kicked rotors quantized on the unit torus, it is possible to analytically find $\\Lambda$ as a function of system-dependent details as shown in Refs.~\\cite{Srivastava16,Tomsovic18c}. The Floquet unitary operator of the system has the form given by Eq.~(\\ref{eq:GenericFloquet}) where the subsystem Floquet operator for one kicked rotor is\n\\begin{equation}\nU_A = \\exp[- i p_A^2\/(2\\hbar)] \\exp(-i V_A\/\\hbar),\n\\end{equation}\nwith kicking potential given by\n\\begin{equation}\nV_A = K_A \\cos(2\\pi q_A)\/4\\pi^2,\n\\end{equation}\nwhere $K_A$ is the kicking strength. Similarly for subsystem $B$. The entangling operator is\n\\begin{equation}\nU_{AB}(b) = \\exp(-i b V_{AB}\/\\hbar),\n\\end{equation}\nwhere the interaction potential is\n\\begin{equation}\nV_{AB} = \\frac{1}{4\\pi^2} \\cos[2\\pi(q_A+q_B)].\n\\end{equation}\nThe angle variables $q_j$ are restricted to the interval $[0,1)$, and similarly for the momenta $p_j$. This restriction leads to a 4-dimensional torus phase space for the corresponding classical system \\cite{Froeschle71,Lakshminarayan2001,Richter14}. The kicking strengths $(K_A,\\, K_B) = (10,14),\\,(18,22),\\, \\ldots$ with up to 20 realizations are chosen such that the classical dynamics is chaotic. The boundary conditions are chosen such that both time-reversal invariance and parity symmetry are broken. Thus the subsystem spectral fluctuations are approximately like those of the CUE. In addition we use $N=N_A = N_B = 100$ for the numerical computations. 
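The construction above can be sketched in a few lines of Python. This is a sketch only, with several assumptions: $\hbar = 1/(2\pi N)$ for torus quantization, the kinetic factor applied in the momentum basis via the unitary DFT, the ordering $U = U_{AB}\,(U_A \otimes U_B)$, and ad hoc momentum-grid phases standing in for the symmetry-breaking boundary conditions mentioned in the text:

```python
import numpy as np

def kicked_rotor_floquet(N, K, phase=0.0):
    # Floquet matrix of one kicked rotor on the unit torus (sketch).
    # hbar = 1/(2*pi*N); position grid q_n = n/N, momentum grid p_m = (m+phase)/N.
    # 'phase' is a hypothetical stand-in for generic boundary conditions.
    hbar = 1.0 / (2.0 * np.pi * N)
    n = np.arange(N)
    q = n / N
    p = (n + phase) / N
    V = K * np.cos(2.0 * np.pi * q) / (4.0 * np.pi ** 2)
    F = np.fft.fft(np.eye(N), norm="ortho")          # unitary DFT matrix
    kin = np.exp(-1j * p ** 2 / (2.0 * hbar))
    # kinetic factor in the momentum basis, kick factor in the position basis
    return F.conj().T @ (kin[:, None] * F) @ np.diag(np.exp(-1j * V / hbar))

def coupled_floquet(N, KA, KB, b):
    # U = U_AB (U_A x U_B); the ordering is an assumption here.
    UA = kicked_rotor_floquet(N, KA, 0.3)
    UB = kicked_rotor_floquet(N, KB, 0.7)
    qA = np.arange(N) / N
    VAB = np.cos(2.0 * np.pi * (qA[:, None] + qA[None, :])) / (4.0 * np.pi ** 2)
    hbar = 1.0 / (2.0 * np.pi * N)
    UAB = np.diag(np.exp(-1j * b * VAB / hbar).ravel())
    return UAB @ np.kron(UA, UB)

U = coupled_floquet(20, 10.0, 14.0, 1e-3)
print(np.max(np.abs(U @ U.conj().T - np.eye(400))))  # small: U is unitary
```

Evolving a product of subsystem eigenstates with powers of `U` and computing the Schmidt spectrum of the reshaped state vector then reproduces the entanglement dynamics studied here.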
The transition parameter for the coupled kicked rotors is \\cite{Srivastava16,Tomsovic18c,HerKieFriBae2019:p}\n\\begin{equation}\n\\Lambda_{\\text{KR}} \\simeq \\frac{N^2}{4\\pi^2} \\left(1-J_0^2(Nb\/2\\pi) \\right) \\approx \\frac{N^4 b^2}{32 \\pi^4},\n\\end{equation}\nwhere $J_0(\\cdot)$ is the Bessel function of the first kind\n\\cite[Eq.~10.2.2]{DLMF},\nand the approximation holds when $Nb \\ll 1$. In Fig.~\\ref{fig:KR_Salpha}, the entanglement dynamics for various $\\Lambda$-values of the coupled kicked rotors is shown against the theory given by Eq.~(\\ref{eq:alphaEntropyTheory}).\nOverall good agreement is found\nwith some small deviations for the von Neumann entropy\nwhich are similar to those found for the\nCUE case shown in Fig.~\\ref{fig:CUE_Salpha}.\n\n\\section{Summary and outlook}\n\\label{sec:summary-and-outlook}\n\nAn analytic theory is given in this paper for the rate of entanglement production for a quenched system as a function of the interaction strength between chaotic subsystems. In particular, all the expressions are given in terms of the universal transition parameter $\\Lambda$. It is shown that in the perturbative regime for an initial product of subsystem eigenstates (a so-called quench), the entanglement saturates at very small values proportional to $\\sqrt{\\Lambda}$. Furthermore, in the same regime, once the appropriate time scale is properly identified and the entanglement entropies are scaled by their saturation value, there exists a single universal entropy production curve: for a given system size, the interaction strength determines $\\Lambda$, which determines the time scale and saturation values, and there is no other dependence in the entropy production beyond that. The universal curve has an overshoot, which is slightly more pronounced for the time-reversal non-invariant case, and then it settles down to a saturation value. 
As $\\Lambda$ increases, the perturbation regime eventually breaks down, roughly for $\\Lambda \\gtrsim 10^{-2}$, as illustrated in Fig.~\\ref{fig:UniversalCurvePlot}.\n\nAs for the full eigenstates of the interacting system \\cite{Tomsovic18c}, it was also possible here to recursively embed the perturbation theory. This enables a description of the full transition in entropy production behaviors as a function of subsystem interaction strength and size, the limiting behaviors being no entanglement entropy production for non-interacting systems, and for strongly interacting systems the production behavior seen for initially random product states. The expressions are uniformly valid for all times and interaction strengths. It also turns out that the initial entropy production rate is even independent of whether time-reversal symmetry is preserved or not.\n\nThe present study also raises various interesting\nquestions to be addressed in the future:\nthe considered case of initial states given by direct products of subsystem eigenstates has the crucial property\nthat the automatic Schmidt decomposition holds,\nwhich allows for a perturbative treatment.\nIf one considers instead,\nfor example, sums of such eigenstates or\ndirect products of subsystem random vectors, then a much faster entanglement\ngeneration occurs, which requires a completely different\ntheoretical description.\nMoreover, although not shown, the fluctuations of the entropies seen from one initial\nstate to another depend dramatically on whether subsystem eigenvectors or random states are considered. This should also be reflected in the statistics\nof Schmidt eigenvalues, which are expected to show\nheavy-tailed distributions as was found before in the case of eigenstates.\nAnother interesting ensemble for the case of a dynamical system\nlike the coupled kicked rotors is given by coherent states\nas initial states. 
There one will have an initial phase\nfor which the entanglement only grows very slowly\nup to the Ehrenfest time beyond which a fast increase\nof entanglement occurs.\nFinally, bipartite many-body systems, like an interacting spin-chain,\nshould share many of the features\nof the entanglement production demonstrated here,\nand at the same time also allow for even more possibilities\nof initial states.\n\n\n\\acknowledgments\n\nWe would like to thank Maximilian Kieler for useful discussions.\nOne of the authors (ST) gratefully acknowledges support for visits\nto the Max-Planck-Institut f\\\"ur Physik komplexer Systeme.\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section*{Figure Captions\\markboth\n {FIGURECAPTIONS}{FIGURECAPTIONS}}\\list\n {Figure \\arabic{enumi}:\\hfill}{\\settowidth\\labelwidth{Figure\n999:}\n \\leftmargin\\labelwidth\n \\advance\\leftmargin\\labelsep\\usecounter{enumi}}}\n\\let\\endfigcap\\endlist \\relax\n\\def\\tablecap{\\section*{Table Captions\\markboth\n {TABLECAPTIONS}{TABLECAPTIONS}}\\list\n {Table \\arabic{enumi}:\\hfill}{\\settowidth\\labelwidth{Table\n999:}\n \\leftmargin\\labelwidth\n \\advance\\leftmargin\\labelsep\\usecounter{enumi}}}\n\\let\\endtablecap\\endlist \\relax\n\\def\\reflist{\\section*{References\\markboth\n {REFLIST}{REFLIST}}\\list\n {[\\arabic{enumi}]\\hfill}{\\settowidth\\labelwidth{[999]}\n \\leftmargin\\labelwidth\n \\advance\\leftmargin\\labelsep\\usecounter{enumi}}}\n\\let\\endreflist\\endlist \\relax\n\\def\\list{}{\\rightmargin\\leftmargin}\\item[]{\\list{}{\\rightmargin\\leftmargin}\\item[]}\n\\let\\endquote=\\endlist\n\\makeatletter\n\\newcounter{pubctr}\n\\def\\@ifnextchar[{\\@publist}{\\@@publist}{\\@ifnextchar[{\\@publist}{\\@@publist}}\n\\def\\@publist[#1]{\\list\n {[\\arabic{pubctr}]\\hfill}{\\settowidth\\labelwidth{[999]}\n \\leftmargin\\labelwidth\n \\advance\\leftmargin\\labelsep\n \\@nmbrlisttrue\\def\\@listctr{pubctr}\n 
\\setcounter{pubctr}{#1}\\addtocounter{pubctr}{-1}}}\n\\def\\@@publist{\\list\n {[\\arabic{pubctr}]\\hfill}{\\settowidth\\labelwidth{[999]}\n \\leftmargin\\labelwidth\n \\advance\\leftmargin\\labelsep\n \\@nmbrlisttrue\\def\\@listctr{pubctr}}}\n\\let\\endpublist\\endlist \\relax\n\\makeatother\n\\newskip\\humongous \\humongous=0pt plus 1000pt minus 1000pt\n\\def\\mathsurround=0pt{\\mathsurround=0pt}\n\\def\\eqalign#1{\\,\\vcenter{\\openup1\\jot \\mathsurround=0pt\n \\ialign{\\strut \\hfil$\\displaystyle{##}$&$\n \\displaystyle{{}##}$\\hfil\\crcr#1\\crcr}}\\,}\n\\newif\\ifdtup\n\\def\\panorama{\\global\\dtuptrue \\openup1\\jot \\mathsurround=0pt\n \\everycr{\\noalign{\\ifdtup \\global\\dtupfalse\n \\vskip-\\lineskiplimit \\vskip\\normallineskiplimit\n \\else \\penalty\\interdisplaylinepenalty \\fi}}}\n\\def\\eqalignno#1{\\panorama \\tabskip=\\humongous\n \\halign to\\displaywidth{\\hfil$\\displaystyle{##}$\n \\tabskip=0pt&$\\displaystyle{{}##}$\\hfil\n \\tabskip=\\humongous&\\llap{$##$}\\tabskip=0pt\n \\crcr#1\\crcr}}\n\\relax\n\n\n\n\\def\\begin{equation}{\\begin{equation}}\n\\def\\end{equation}{\\end{equation}}\n\\def\\begin{eqnarray}{\\begin{eqnarray}}\n\\def\\end{eqnarray}{\\end{eqnarray}}\n\\def\\bar{\\partial}{\\bar{\\partial}}\n\\def\\bar{J}{\\bar{J}}\n\\def\\partial{\\partial}\n\\def\\noindent{\\noindent}\n\n\n\n\\def\\kappa{\\kappa}\n\\def\\rho{\\rho}\n\\def\\alpha{\\alpha}\n\\def\\Alpha{\\Alpha}\n\\def\\beta{\\beta}\n\\def\\Beta{\\Beta}\n\\def\\gamma{\\gamma}\n\\def\\Gamma{\\Gamma}\n\\def\\delta{\\delta}\n\\def\\Delta{\\Delta}\n\\def\\epsilon{\\epsilon}\n\\def\\Epsilon{\\Epsilon}\n\\def\\p{\\pi} \n\\def\\Pi{\\Pi}\n\\def\\chi{\\chi}\n\\def\\Chi{\\Chi}\n\\def\\theta{\\theta}\n\\def\\Theta{\\Theta}\n\\def\\mu{\\mu}\n\\def\\nu{\\nu}\n\\def\\omega{\\omega}\n\\def\\Omega{\\Omega}\n\\def\\lambda{\\lambda}\n\\def\\Lambda{\\Lambda}\n\\def\\s{\\sigma} \n\\def\\Sigma{\\Sigma}\n\\def\\varphi{\\varphi}\n\n\n\\def\\relax{\\rm I\\kern-.18em R}{\\relax{\\rm I\\kern-.18em 
R}}\n\\def\\relax{\\rm 1\\kern-.35em1}{\\relax{\\rm 1\\kern-.35em1}}\n\n\\renewcommand{\\thesection.\\arabic{equation}}}{\\thesection.\\arabic{equation}}\n\\csname @addtoreset\\endcsname{equation}{section}\n\n\n\n\n\n\n\\def${\\cal N}=4${${\\cal N}=4$}\n\\def\\alpha _S{\\alpha _S}\n\\def\\boldsymbol {\\boldsymbol }\n\n\\newcommand{\\Feyn}[1]{#1\\kern-0.45em\/}\n\n\\headheight 10 pt\n\n\n\n\n\\begin{document}\n\n\\title{\\Large \\bf Dual conformal invariance in the Regge limit}\n\\author{\\large C{\\'e}sar~G{\\'o}mez, Johan~Gunnesson, Agust{\\'i}n~Sabio~Vera \\\\\n{\\it Instituto de F{\\' i}sica Te{\\' o}rica UAM\/CSIC,}\\\\ \n{\\it Universidad Aut{\\' o}noma de Madrid, E-28049 Madrid, Spain}}\n\n\\maketitle\n\n\\vspace{-9cm}\n\\begin{flushright}\n{\\small IFT--UAM\/CSIC--09--38}\n\\end{flushright}\n\n\\vspace{7cm}\n\\begin{abstract}\n\\noindent\n\nA dual conformal symmetry, analogous to the dual conformal symmetry observed for the scattering amplitudes of ${\\cal N}=4$ Super Yang-Mills theory, is identified in the Regge limit of QCD. Combined with the original two-dimensional conformal symmetry of the theory, this dual symmetry can potentially explain the integrability of the BFKL Hamiltonian. We also give evidence that the symmetry survives when a subset of unitarity corrections are taken into account by studying briefly the non-planar $2$ to $m$ reggeon transition vertices.\n \n\\end{abstract}\n\n\n\\section{Introduction}\n\nIn the last few years there has been a great deal of progress in the study of gluon scattering amplitudes in \nthe maximally supersymmetric gauge theory in four dimensions, ${\\cal N}=4$ Super Yang-Mills (SYM). One of the most surprising \ndevelopments has been the discovery of a hidden symmetry in the planar ($N_c \\rightarrow \\infty$) limit, coined \nas ``dual super-conformal symmetry''~\\cite{dualconformal, korchemskyconf,confward}, different from the original \nsuper-conformal symmetry of the Lagrangian. 
This symmetry was uncovered by introducing a new set of variables \n$x_i$, related to the external (all taken as incoming) gluon momenta $p_i$, $i=1\\ldots n$, through\n\\begin{equation}\nx_i - x_{i+1}=p_i \\ , \\label{eq:xi}\n\\end{equation}\nand acts on the $x_i$ just as a four-dimensional conformal symmetry acts on spatial coordinates. The presence \nof this dual symmetry can be understood through the AdS\/CFT correspondence~\\cite{AdSCFT} since it was \nshown~\\cite{fermionicT} that the problem of calculating a given scattering amplitude can be mapped, \nthrough a fermionic T-duality, to that of calculating a light-like Wilson loop with corners at coordinates \ngiven by the $x_i$. This fermionic T-duality maps the string $\\sigma$-model to itself, and the dual conformal \nsymmetry becomes the ordinary symmetry of the space in which the Wilson loop lives. \n\nLike the ordinary conformal symmetry, the dual symmetry is broken by infrared divergences, arising as cusp \ndivergences in the language of Wilson loops. However, the cusp divergences are known to exponentiate, which \nallows the use of the broken symmetry to impose powerful constraints on the amplitudes in the form of anomalous \nWard identities. These identities fix the 4 and 5 point amplitudes completely while the undetermined parts of \nhigher-point amplitudes can only depend on dual-conformal invariants. Also, taken together, the original and \ndual conformal symmetries generate an infinite-dimensional Yangian symmetry~\\cite{plefka}, ordinarily characteristic \nof exactly solvable models.\n\nInteresting properties of ${\\cal N}=4$ amplitudes appear also in their high energy (Regge) limit. In a nutshell, Regge theory \nestablishes the structure of scattering amplitudes when the momentum transfer is small compared to the total \ncenter-of-mass energy. It turns out that ${\\cal N}=4$ amplitudes exhibit Regge-like behaviour at all orders in the 't \nHooft coupling even outside of the Regge limit. 
In fact, the 4 and 5 point amplitudes are Regge \nexact~\\cite{korchemskyconf, AgustinBartelsLipatov}, \nmeaning that they can always be written in a factorized form characteristic of high energies, irrespective of \nthe values of the kinematical invariants\\footnote{Also, a proposal for the undetermined part of the 6 point amplitude, having the correct Regge behaviour and given in terms of conformal cross-ratios, is given in the latest version of \\cite{AgustinBartelsLipatov2}. }. Furthermore, in the Leading Logarithmic Approximation (LLA) of gluon \namplitudes the Regge limit is independent of the gauge theory, so ${\\cal N}=4$ can give insight into the high energy \nbehaviour of QCD.\n\nIn the Regge limit amplitudes are dominated by the $t$-channel exchange of reggeized gluons (a reggeized gluon is \na collective state of ordinary gluons projected on a colour octet). The bound state of two reggeized gluons when \nprojected on a colour singlet in the $t$-channel is known as the hard (or perturbative) pomeron. The interaction between \nreggeized gluons is governed by the Schr{\\\"o}dinger-like BFKL integral equation~\\cite{BFKL} where the invariant mass of \n$s$-channel gluons can be interpreted as the time variable and its kernel as an effective Hamiltonian living on the \ntwo-dimensional transverse space. This Hamiltonian is free from infrared singularities and carries a yet to be understood \nintegrability~\\cite{BFKLintegrable} \\footnote{Integrability also appears when the gluon composite states are projected onto the\nadjoint representation~\\cite{Lipatov:2009nt}.}.\n\nThe question that arises is if the integrable structures present in ${\\cal N}=4$ SYM can shed some light on the integrability \nfound in the Regge limit. In this region the dynamics of the theory is reduced to the transverse plane, where a two-dimensional \nconformal symmetry was found in the effective Hamiltonian~\\cite{Lipatov1986}. 
Given the emergence of the Yangian in the \nfour-dimensional case, and that one can heuristically interpret this $SL(2,C)$ as a reduction of the four-dimensional ordinary conformal \nsymmetry, it would then seem natural to look for a dual $SL(2,C)$ symmetry in the high-energy limit\\footnote{Hints in this direction have already appeared in the literature. A similar dual symmetry was exploited in \\cite{Lipatov:2009nt} in order to map supersymmetric multiparticle amplitudes in multiregge kinematics to an integrable open spin chain. In fact, the octet kernel, after subtraction of infrared divergences, can be written in a form manifestly invariant under this symmetry. Also, the BFKL Hamiltonian has holomorphic separability into two pieces which can be written such that they are invariant under the duality transformation $p_i \\to \\rho_i - \\rho_{i+1}\n\\to p_{i+1}$, similar to the change of variables \\eqref{eq:xi}, with the $\\rho$ being the gluon transverse coordinates in\ncomplex notation~\\cite{Lipatov:1998as}.}. It is this question that we address in this Letter, showing that BFKL indeed exhibits covariance under such a dual symmetry.\n\nWe will also study a set of corrections to BFKL, in the form of $2\\rightarrow m$ reggeized gluon transition \nvertices, which also turn out to be dual $SL(2,C)$-covariant. This result is important for high-energy QCD, since the inclusion of such vertices is necessary, at sufficiently high energies, to fulfill unitarity in all channels. 
\n\n\n\\section{The dual $SL(2,C)$ symmetry}\n\n\n\\begin{figure}[ht]\n\\psfrag{+}{$+$} \\psfrag{=}{$=$} \\psfrag{ka}{$k_A$} \\psfrag{kb}{$k_B$}\n\\psfrag{kp}{$k'$} \\psfrag{F}{$F$} \\psfrag{kamq}{$k_A - q$}\n\\psfrag{kbmq}{$k_B - q$} \\psfrag{x1}{$x_1$} \\psfrag{x2}{$x_2$}\n\\psfrag{x3}{$x_3$} \\psfrag{x4}{$x_4$}\n\\begin{center}\n\\includegraphics{BFKL2.eps}\n\\caption{\\small The BFKL integral equation for the four-point reggeized gluon\nGreen function.} \\label{fig:BFKL}\n\\end{center}\n\\end{figure}\n\nThe scattering amplitude for the 2 to 2 reggeized gluons process in the Regge limit has an iterative structure \ndominated by the exchange in the $t$-channel of a colour singlet. This implies that in the LLA the corresponding \n4 point gluon Green function can be written as the solution to an integral \nequation, the BFKL equation \\footnote{For an introductory treatment of the BFKL equation, see \\cite{rossforshaw}}, shown in Fig.~\\ref{fig:BFKL}. Written in terms of $\\omega$, the Mellin conjugate variable of the \ncenter-of-mass energy (which can be translated into \nthe rapidity, $Y$, of the emitted particles in the $s$-channel)), and the incoming two dimensional momenta it reads \n\\begin{equation}\n\\omega F(\\omega , \\, \\boldsymbol {k}_A,\\, \\boldsymbol {k}_B,\\, \\boldsymbol {q}) = \\delta^{(2)}(\\boldsymbol {k}_A - \\boldsymbol {k}_B) + \\int d^2\\boldsymbol {k}' K(\\boldsymbol {k}_A,\\,\n \\boldsymbol {k}_A-\\boldsymbol {q};\\, \\boldsymbol {k}',\\, \\boldsymbol {k}'-\\boldsymbol {q}) F(\\omega , \\, \\boldsymbol {k}',\\, \\boldsymbol {k}_B ,\\, \\boldsymbol {q}) \\, \\label{eq:BFKLconK} \n\\end{equation}\nwhere the kernel $K(\\boldsymbol {k}_A,\\, \\boldsymbol {k}_A-\\boldsymbol {q};\\, \\boldsymbol {k}',\\, \\boldsymbol {k}'-\\boldsymbol {q})$ is given by\n\\begin{align}\n\\frac{K_R (\\boldsymbol {k}_A,\\, \\boldsymbol {k}_A-\\boldsymbol {q}; -\\boldsymbol {k}'+\\boldsymbol {q}, \\, -\\boldsymbol {k}')}{8\\pi^3\\boldsymbol {k}_A^2(\\boldsymbol 
{k}' - \\boldsymbol {q})^2} \n+ \\left[ \\omega (\\boldsymbol {k}_A^2) + \\omega ((\\boldsymbol {k}_A-\\boldsymbol {q})^2) \\right]\\delta ^{(2)}(\\boldsymbol {k}_A-\\boldsymbol {k}') \\ . \\label{eq:Kernel}\n\\end{align}\nThe ``real emission'' part\\footnote{Due to the optical theorem, when the forward ${\\bf q}=0$ limit is taken this piece in \nthe kernel corresponds to the contribution to multiparticle production from on-shell gluons in the $s$-channel.} \nhas the following structure\n\\begin{equation}\nK_R (\\boldsymbol {p}_1,\\, \\boldsymbol {p}_2;\\, \\boldsymbol {p}_3,\\, \\boldsymbol {p}_4) = -N_cg^2\\left[ \\left( \\boldsymbol {p_3}+\\boldsymbol {p_4} \\right)^2 - \\frac{\\boldsymbol {p}_2^2\\boldsymbol {p}_4^2}{(\\boldsymbol {p}_2+\\boldsymbol {p}_3)^2}- \\frac{\\boldsymbol {p}_1^2\\boldsymbol {p}_3^2}{(\\boldsymbol {p}_1+\\boldsymbol {p}_4)^2} \\right]. \\label{eq:KernelR}\n\\end{equation}\nThis notation, with $p_1,\\, \\ldots ,\\, p_4$ being the cyclically ordered reggeized gluon momenta taken as incoming, \nwill be convenient for the generalization of this vertex to the $2 \\rightarrow m$ reggeized gluon transition case \nas we will see below. \n\nThe gluon Regge trajectory reads\n\\begin{equation}\n\\omega (\\boldsymbol {q}^2) = -\\frac{g^2N_c}{16 \\pi ^3 }\\int d^2\\boldsymbol {k}' \\frac{\\boldsymbol {q}^2}{\\boldsymbol {k}'^2(\\boldsymbol {k}'-\\boldsymbol {q})^2} \\ . \n\\label{eq:trajectoryreggegluon}\n\\end{equation}\nThe trajectory is IR divergent and in general requires regularization.\n\nWe will now show that the BFKL equation in Eq.~\\eqref{eq:BFKLconK} formally exhibits a dual $SL(2,C)$ symmetry which, \nin contrast with the original $SL(2,C)$ symmetry of BFKL, uncovered by Fourier transforming into a coordinate \nrepresentation, is realized directly in transverse momentum space. 
This new symmetry is closely analogous to the dual \nconformal symmetry observed in $\\mathcal{N}=4$ SYM for gluon scattering amplitudes, and we will see that it turns out \nto be broken by infrared effects just as in the four-dimensional gauge theory.\n\nLet us now rewrite Eq.~\\eqref{eq:BFKLconK} in terms of dual variables. Taken as incoming, the external momenta are \n$\\boldsymbol {k}_A$, $-\\boldsymbol {k}_A+ \\boldsymbol {q}$, $\\boldsymbol {k}_B - \\boldsymbol {q}$ and $-\\boldsymbol {k}_B$ so, introducing the notation \n$x_{i,j}\\equiv x_i - x_j$, we define the new set of variables as\n\\begin{equation}\n\\boldsymbol {p}_1 = x_{1,2} = \\boldsymbol {k}_A, \\, \\boldsymbol {p}_2 = x_{2,3} = \\boldsymbol {q} -\\boldsymbol {k}_A, \\,\n\\boldsymbol {p}_3 = x_{3,4} = \\boldsymbol {k}_B - \\boldsymbol {q}, \\, \\boldsymbol {p}_4=x_{4,1} = -\\boldsymbol {k}_B. \n\\end{equation}\nEquivalently, we could have written $\\boldsymbol {k}_A = x_{1,2}, \\, \\boldsymbol {k}_B = x_{1,4}, \\, \n\\boldsymbol {q} = x_{1,3}$ with $x_1$ then being a simple shift of the origin for the external momenta.\n\nIn these new variables the gluon Regge trajectory is\n\\begin{equation}\n\\omega (\\boldsymbol {k}_A^2) = \\omega (x^2_{1,2})=-\\frac{g^2N_c}{16 \\pi ^3 }\\int d^2 x_I \\frac{x^2_{1,2}}{x^2_{I,1}x^2_{I,2}} \\ , \\label{eq:trajectoryreggegluonk1x}\n\\end{equation}\nwhere we have introduced $x_I$ through $\\boldsymbol {k}' = x_{I,2}$. Ignoring for the moment that this expression is divergent and thus ill-defined, we see that it has a formal two-dimensional conformal symmetry. It is formally invariant under translations, rotations and scalings of the $x_i$, and also under the conformal inversions $x_i \\rightarrow \\frac{x_i}{x_i^2}$, since they would imply\n\\begin{equation}\nd^2 x_I \\rightarrow \\frac{d^2 x_I}{x_I^4} \\ , \\;\\; x_{i,j}^2 \\rightarrow \\frac{x_{i,j}^2}{x_i^2x_j^2} \\ . 
\\label{eq:transfxij} \n\\end{equation}\nIn the same way $\\omega ((\\boldsymbol {k}_B-\\boldsymbol {q})^2) = \\omega (x^2_{2,3})$ is also formally conformally invariant. \nNow, given that the trajectory is infrared divergent one would expect this symmetry to be broken by the introduction \nof a regulator, an issue which is discussed in the next section.\n\nRewriting the full kernel~\\eqref{eq:Kernel} in terms of the $x_i$, with $\\boldsymbol {k}'=x_{1,I}$, we get\n\\begin{equation}\nK(x_{1,2},\\, x_{3,2};\\, x_{1,I},\\, x_{3,I}) = \n\\frac{K_R (x_{1,2},\\, x_{2,3}; x_{3,I}, \\, x_{I,1})}{8\\pi^3x_{1,2}^2x_{I,3}^2} \n+ \\left[ \\omega (x_{1,2}^2) + \\omega (x_{2,3}^2) \\right]\\delta ^{(2)}(x_{2,I}) \\ , \\label{eq:Kernelx}\n\\end{equation}\nwhere\n\\begin{equation}\nK_R (x_{1,2},\\, x_{2,3}; x_{3,I}, \\, x_{I,1}) = -N_cg^2\\left[ x_{1,3}^2 - \\frac{x_{2,3}^2 x_{I,1}^2}{x_{2,I}^2}- \\frac{x_{1,2}^2x_{I,3}^2}{x_{2,I}^2} \\right] \\ . \\label{eq:KernelRx}\n\\end{equation}\nUsing that $\\delta ^{(2)}(x_{2,I}) \\rightarrow x_2^2x_I^2 \\delta ^{(2)}(x_{2,I})$ under conformal inversions one then finds immediately that the kernel transforms covariantly \\footnote{This is again similar to the (conjectured) dual conformal symmetry of scattering amplitudes in ${\\cal N}=4$ SYM, under which the amplitudes transform covariantly, as opposed to the ordinary conformal symmetry which leaves them invariant.}\n\\begin{equation}\nK(x_{1,2},\\, x_{3,2};\\, x_{1,I},\\, x_{3,I}) \\rightarrow x^2_2x^2_I K(x_{1,2},\\, x_{3,2};\\, x_{1,I},\\, x_{3,I}) \\ .\n\\end{equation}\nTogether with translations, rotations and dilatations this forms a dual $SL(2,C)$ symmetry, different from the one \npreviously known. More precisely, dilatations and rotations coincide with the original $SL(2,C)$-symmetry, while \ntranslations and inversions will be different. 
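The covariance of the kernel under dual inversions can be verified numerically. A minimal Python sketch (the overall constant $-N_c g^2/(8\pi^3)$ is dropped, and only the real-emission part is kept, since the trajectory terms involve divergent integrals):

```python
import numpy as np

rng = np.random.default_rng(1)

def sq(a, b):
    # x_{i,j}^2 in two-dimensional Euclidean notation
    return np.sum((a - b) ** 2)

def inv(x):
    # conformal inversion x -> x / x^2
    return x / np.sum(x ** 2)

def kernel_real(x1, x2, x3, xI):
    # K_R(x_{1,2}, x_{2,3}; x_{3,I}, x_{I,1}) / (x_{1,2}^2 x_{I,3}^2),
    # dropping the overall constant factor
    KR = sq(x1, x3) - sq(x2, x3) * sq(xI, x1) / sq(x2, xI) \
         - sq(x1, x2) * sq(xI, x3) / sq(x2, xI)
    return KR / (sq(x1, x2) * sq(xI, x3))

# random 2D dual coordinates
x1, x2, x3, xI = (rng.standard_normal(2) for _ in range(4))
lhs = kernel_real(inv(x1), inv(x2), inv(x3), inv(xI))
rhs = np.sum(x2 ** 2) * np.sum(xI ** 2) * kernel_real(x1, x2, x3, xI)
print(np.isclose(lhs, rhs))  # True: the kernel transforms covariantly
```

Each $x_{i,j}^2$ picks up a factor $1/(x_i^2 x_j^2)$ under inversion, and the factors combine to the overall weight $x_2^2 x_I^2$ quoted above.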
\n\nApplied to the BFKL equation~\\eqref{eq:BFKLconK} and using that the integration measure transforms according \nto~\\eqref{eq:transfxij} one finds that a factor of $\\frac{x_2^2}{x_I^2}$ is produced inside the integral. \nConsequently, if the Green function $F(\\omega , \\, x_{1,2},\\, x_{1,4},\\, x_{1,3})$ were to produce a factor of \n$x_2^2$ upon inversion, then its convolution with the kernel $K \\otimes F$, would transform in the same way as $F$ \nitself. Now, at lowest order, $F$ is simply given by the delta function, which indeed transforms in this way, \n$\\delta^{(2)} (\\boldsymbol {k}_A - \\boldsymbol {k}_B)=\\delta ^{(2)} (x_{2,4}) \\rightarrow x_2^2 x_4^2 \\delta ^{(2)} (x_{2,4})$. \nSince $F$ can be constructed through iterated convolution with the kernel, it follows that the Green function \nshould have the same conformal properties as the delta function. \n\nWe can obtain a formal expression for the Green function having the correct conformal properties by iteration. \nIntroducing the short-hand notation\n\\begin{equation}\n\\omega_0 \\left(\\bf{k}_A,\\, \\bf{q} \\right) \\equiv \\omega (\\boldsymbol {k}_A^2) + \\omega ((\\boldsymbol {k}_A-\\boldsymbol {q})^2), \n\\xi \\left(\\bf{k},\\, \\bf{k}_A,\\, \\bf{q}\\right) \\equiv \n\\frac{K_R (\\boldsymbol {k}_A,\\, \\boldsymbol {k}_A-\\boldsymbol {q}; -\\boldsymbol {k}+\\boldsymbol {q}, \\, -\\boldsymbol {k})}{8\\pi^3 \\boldsymbol {k}_A^2(\\boldsymbol {k} - \\boldsymbol {q})^2} \\ \n\\end{equation}\none finds (with $\\bf{k}_0 \\equiv \\bf{k}_A$):\n\\begin{eqnarray}\nF \\left(\\omega,\\bf{k}_A,\\bf{k}_B,\\bf{q}\\right) =\n{\\delta^{(2)} \\left(\\bf{k}_A-\\bf{k}_B\\right) + \\sum_{n=1}^\\infty \\prod_{i=1}^n \n\\int d^2 \\bf{k}_i \\, {\\xi \\left(\\bf{k}_i,\\bf{k}_{i-1},\\bf{q}\\right) \\over \n\\omega - \\omega_0 \\left(\\bf{k}_i,\\bf{q}\\right)} \\delta^{(2)}\n\\left(\\bf{k}_n-\\bf{k}_B\\right) \\over \\omega - \\omega_0 \\left(\\bf{k}_A,\\bf{q}\\right)}. 
\n\\end{eqnarray}\n\nRather than $\\omega$ it is more natural to use the rapidity difference, $Y$, between the external \nparticles as the evolution variable. To this end we perform the inverse Mellin transform\n\\begin{eqnarray}\n{\\cal F} \\left(\\bf{k}_A,\\bf{k}_B,\\bf{q},Y\\right) &=&\n\\int_{a-i \\infty}^{a+i \\infty} {d \\omega \\over 2 \\pi i} e^{\\omega Y}\nF \\left(\\omega,\\bf{k}_A,\\bf{k}_B,\\bf{q}\\right).\n\\end{eqnarray}\nThe formula $\\int_{a-i \\infty}^{a+i \\infty} {d \\omega \\over 2 \\pi i} e^{\\omega Y}\n\\prod_{i=0}^n \\frac{1}{\\omega-\\omega_i} = e^{\\omega_0 Y} \\prod_{i=1}^n\n\\int_0^{y_{i-1}} d y_i e^{\\omega_{i,i-1} y_i}$ for $n > 0$, with $\\omega_{i,j} \\equiv \n\\omega_i - \\omega_j, y_0 \\equiv Y$, is useful to obtain the final expression, written in dual $x$-variables:\n\\begin{equation}\n{\\cal F} \\left(x_{12}, x_{14},x_{13},Y\\right) = e^{\\omega _{2,1} Y} \\Bigg\\{\\delta^{(2)} \\left(x_{24}\\right)\n+ \\sum_{n=1}^\\infty \\prod_{i=1}^n \\int_0^{y_{i-1}} \\hspace{-0.3cm} d y_i\n\\int d^2 x_i \\, \\xi_{i,i-1} e^{\\omega_{i,i-1} y_i} \\delta^{(2)}\n\\left(x_{4,n} \\right) \\Bigg\\}, \n\\label{eq:Frapidityx}\n\\end{equation}\nwhere\n\\begin{eqnarray}\n\\omega_{i,i-1} &=& \\omega _0 \\left(x_{1,i},x_{13}\\right)-\\omega_0 \\left(x_{1,i-1},x_{13}\\right) \\ ,\\\\\n\\xi_{i,i-1} &=& {{\\bar \\alpha}_s \\over 2 \\pi} \\Bigg\\{{ x^2_{i,3} x^2_{i-1,1} +\nx^2_{i-1,3} x^2_{i,1} - x^2_{i-1,i} x^2_{13}\\over x^2_{i,3} x^2_{i-1,i} x^2_{i-1,1}}\\Bigg\\} \\ .\n\\end{eqnarray}\nThis representation preserves the transformation properties of the original equation. In the forward case, where the \nmomentum transfer is zero, the same structure remains with \n\\begin{equation}\n\\omega_{i,i-1} = 2\\left(\\omega \\left(x_{1,i}\\right)-\\omega \\left(x_{1,i-1}\\right)\\right) \\ ,\n\\xi_{i,i-1} = {{\\bar \\alpha}_s \\over \\pi} { 1 \\over x^2_{i-1,i}} \\ .\n\\end{equation}\nIn this case the solution also has a formal dual $SL(2,C)$ covariance. 
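The inverse Mellin transform identity used above can be spot-checked numerically for a small number of poles. A Python sketch (distinct real $\omega_i$ assumed; the left side is evaluated by summing residues, the right side by iterated trapezoidal quadrature of the nested ordered integrals):

```python
import numpy as np

def lhs_residues(omegas, Y):
    # Inverse Mellin transform of prod_i 1/(omega - omega_i) for Y > 0,
    # obtained by closing the contour and summing residues at the poles.
    total = 0.0
    for i, wi in enumerate(omegas):
        denom = np.prod([wi - wj for j, wj in enumerate(omegas) if j != i])
        total += np.exp(wi * Y) / denom
    return total

def rhs_nested(omegas, Y, steps=4000):
    # e^{omega_0 Y} times the nested ordered integrals of e^{omega_{i,i-1} y_i}
    # over 0 < y_n < ... < y_1 < Y (y_0 = Y), built from the innermost outward.
    y = np.linspace(0.0, Y, steps)
    inner = np.ones_like(y)
    for i in range(len(omegas) - 1, 0, -1):
        integrand = np.exp((omegas[i] - omegas[i - 1]) * y) * inner
        # cumulative trapezoidal integral from 0 to y_{i-1}
        inner = np.concatenate(([0.0],
                np.cumsum((integrand[1:] + integrand[:-1]) / 2) * (y[1] - y[0])))
    return np.exp(omegas[0] * Y) * inner[-1]

omegas, Y = [0.3, -0.2, 0.1], 2.0
print(lhs_residues(omegas, Y), rhs_nested(omegas, Y))  # the two sides agree
```

This makes explicit how each pole of the $\omega$-space representation turns into an ordered rapidity integration in Eq.~\eqref{eq:Frapidityx}.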
This should be contrasted with \nthe original $SL(2,C)$-invariance of the BFKL kernel, which does not appear in the forward case.\n\nBefore ending this section, it is worth noting that this formal dual $SL(2,C)$ covariance is \npresent in the same form for all color projections in the $t$-channel since they differ only by the overall factor in front of $K_R$: with $N_c=3$, $c_1 = 1,\\, c_{8_a} = c_{8_s} = 1\/2,\\, c_{10} = c_{\\overline{10}} = 0$, etc.\n\n\\section{The effect of IR divergences}\n \nIn ${\\cal N}=4$ SYM infrared divergences break the dual conformal symmetry. For BFKL, such divergences cancel, opening the \npossibility that the dual $SL(2,C)$-symmetry remains exact. However, this turns out not to be the case. Perhaps the \nsimplest way to see this is by studying the forward case. If $F$ has the transformation properties of the delta \nfunction, it can be written as\n\\begin{equation}\nF = F_1 \\delta ^{(2)}(\\boldsymbol {k}_A - \\boldsymbol {k}_B) + \\frac{1}{(\\boldsymbol {k}_A - \\boldsymbol {k}_B)^2}F_2 \\ ,\n\\end{equation}\nwhere $F_1$ and $F_2$ are dual conformally invariant, since $(\\boldsymbol {k}_A - \\boldsymbol {k}_B)^{-2}$ is the only other function \nthat transforms correctly. When $\\boldsymbol {q}=0$, $x_1=x_3$ and no non-trivial conformal invariant can be formed from the \nthree remaining $x_i$. $F_2$ can thus only be a function of $\\omega$ (or equivalently the rapidity $Y$), and the \ncoupling. But when forming physical quantities one integrates over $\\boldsymbol {k}_A$ and $\\boldsymbol {k}_B$ and the divergences \nat $\\boldsymbol {k}_A=\\boldsymbol {k}_B$ must cancel between $F_1$ and $F_2$.
The factor $(\\boldsymbol {k}_A - \\boldsymbol {k}_B)^{-2}$ is singular \nenough to cancel one factor of the trajectory, but $F_1$ is obtained by repeated application of the trajectory \npart of the kernel so, starting from the second iteration, products of two or more trajectories will appear and \nthe divergences will fail to cancel.\n\nOne can also observe the breakdown of the dual $SL(2,C)$ symmetry directly by regularizing the integrals and \ncancelling the divergences explicitly when performing the iteration. One then finds that the first iteration \nrespects the symmetry, while the second iteration produces a contribution to $F_2$ (when $\\boldsymbol {q}=0$) proportional \nto an anomalous factor of the form $\\ln \\left( \\frac{(\\boldsymbol {k}_A-\\boldsymbol {k}_B)^4}{\\boldsymbol {k}_A^2 \\boldsymbol {k}_B^2}\\right)$, which \nbreaks the symmetry under inversions. The origin of this factor is the regularization of infrared divergences. \nFor example, using dimensional regularization with $D=4-2\\epsilon$\n\\begin{equation}\n\\omega (x^2_{1,2})=-\\frac{g^2N_c}{16 \\pi ^3 }(4 \\pi \\mu)^{2\\epsilon}\\int d^{2-2\\epsilon} x_I \\frac{x^2_{1,2}}{x^2_{I,1}x^2_{I,2}}\\approx -\\frac{g^2N_c}{8 \\pi ^2 }(4\\pi e^{-\\gamma})^\\epsilon \\left( \\ln \\frac{x^2_{1,2}}{\\mu ^2}- \\frac{1}{\\epsilon} \\right) \\ .\n\\end{equation}\nThe divergences will cancel between the trajectories and the real emission part of the kernel, but factors such \nas $\\ln x^2_{1,2}$ will add up giving a non-vanishing anomalous term. So, even though BFKL is infra-red finite, \na remnant of the divergences remains in the form of the breaking of the dual $SL(2,C)$ symmetry in the form given here.\n\nFurther insight can be gained by studying a standard representation of the Green function in the forward case, \nobtained by diagonalizing the BFKL kernel. 
It is\n\\begin{equation}\n{\\cal F} \\left(x_{1,2},x_{1,4},Y\\right) = \n\\sum_{n=-\\infty}^\\infty \\int \\frac{d \\gamma}{2 \\pi i}\n\\left({x^2_{1,2} \\over x^2_{1,4}}\\right)^{\\gamma-\\frac{1}{2}}\n\\frac{e^{{\\bar{\\alpha}_s}\\chi_n\\left(\\gamma\\right) Y + i n \\theta_{2,4}}}{\\pi\\sqrt{x_{1,2}^2 x_{1,4}^2}}\\ ,\n\\end{equation}\nwith $\\chi_n \\left(\\gamma\\right) = 2 \\Psi\\left(1\\right)- \\Psi\\left(\\gamma+\\frac{|n|}{2}\\right)\n- \\Psi\\left(1-\\gamma+\\frac{|n|}{2}\\right)$ and \n$\\cos{\\theta_{2,4}} = { x_{1,2} \\cdot x_{1,4} \\over \\sqrt{x^2_{1,2} x^2_{1,4}}}$.\nIn this representation any dependence on an IR cutoff has canceled explicitly, and one can check that the covariance under conformal inversions is lost.\n\nAn important issue is whether the dual $SL(2,C)$ symmetry is broken beyond repair or whether it can be deformed to take into consideration the anomalous terms. In a best case scenario the symmetry would obey to all orders a simple relation such as the anomalous Ward identity satisfied by the dual conformal symmetry of $\\mathcal{N}=4$ scattering amplitudes \\cite{confward}. This issue is studied in \\cite{johan}, with the result that the dual $SL(2,C)$ does not obey such a simple all-order relation, but can still be deformed so that it becomes exact, at least up to the order studied. The representation then becomes coupling-dependent, but encouragingly, it seems to do so in such a way that the algebra generated by the original and dual $SL(2,C)$ symmetries remains coupling-independent.\n\n\\section{$2\\rightarrow m$ reggeized gluon vertex}\n\n\nThe BFKL amplitude will violate bounds imposed by unitarity at sufficiently high energies. \nIn order to restore unitarity, one of the new elements that must be introduced is a vertex in which the number \nof reggeized gluons in the $t$-channel is not conserved. 
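Returning for a moment to the diagonalized representation of the previous section: the eigenvalue $\chi_n(\gamma)$ obeys some standard consistency checks, e.g. $\chi_0(1/2)=4\ln 2$ (which fixes the LLA intercept $\bar\alpha_s\chi_0(1/2)$), $\chi_1(1/2)=0$, and the symmetry $\chi_n(\gamma)=\chi_n(1-\gamma)$. A minimal numerical sketch (plain Python; the digamma function is evaluated by shifting the argument upward and applying the standard asymptotic expansion):

```python
import math

def digamma(x):
    # Psi(x) via the recurrence Psi(x+1) = Psi(x) + 1/x and the standard
    # asymptotic series Psi(x) ~ ln x - 1/(2x) - 1/(12x^2) + 1/(120x^4) - ...
    s = 0.0
    while x < 10.0:
        s -= 1.0 / x
        x += 1.0
    u = 1.0 / (x * x)
    return s + math.log(x) - 0.5 / x - u * (1.0 / 12 - u * (1.0 / 120 - u / 252))

def chi(n, g):
    # chi_n(gamma) = 2 Psi(1) - Psi(gamma + |n|/2) - Psi(1 - gamma + |n|/2)
    return 2 * digamma(1.0) - digamma(g + abs(n) / 2) - digamma(1 - g + abs(n) / 2)

assert abs(chi(0, 0.5) - 4 * math.log(2)) < 1e-9   # leading eigenvalue at gamma = 1/2
assert abs(chi(1, 0.5)) < 1e-9                     # n = 1 eigenvalue vanishes there
assert abs(chi(0, 0.3) - chi(0, 0.7)) < 1e-12      # gamma <-> 1 - gamma symmetry
```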
As shown in Fig.~\\ref{fig:vertex} we choose to write this \n$2\\rightarrow m$ vertex (see, for example, Eq.~(3.57) of \\cite{BartelsEwerz}) using a convenient assignment \nof the momentum indices\n\\begin{align}\n&K_{2\\rightarrow m}^{\\{ b \\} \\rightarrow \\{a \\}}(\\boldsymbol {p}_2,\\, \\boldsymbol {p}_3;\\, \\boldsymbol {p}_4,\\, \\ldots ,\\, \\boldsymbol {p}_{m+2},\\, \\boldsymbol {p}_1 ) = f_{a_1b_1c_1}f_{c_1a_2c_2}\\cdots f_{c_{m-1}a_mb_2} g^m \\nonumber \\\\\n& \\times \n\\left[ (\\boldsymbol {p}_4 + \\cdots + \\boldsymbol {p}_1)^2 - \\frac{\\boldsymbol {p}_3^2(\\boldsymbol {p}_5 + \\cdots + \\boldsymbol {p}_1)^2}{(\\boldsymbol {p}_3 + \\boldsymbol {p}_4)^2}\n - \\frac{\\boldsymbol {p}_2^2(\\boldsymbol {p}_4 + \\cdots + \\boldsymbol {p}_{m+2})^2}{(\\boldsymbol {p}_1 + \\boldsymbol {p}_2)^2} + \\frac{\\boldsymbol {p}_1^2\\boldsymbol {p}_3^2(\\boldsymbol {p}_5 + \\cdots + \\boldsymbol {p}_{m+2})^2}{(\\boldsymbol {p}_1 + \\boldsymbol {p}_2)^2(\\boldsymbol {p}_3 + \\boldsymbol {p}_4)^2} \\right] \\label{eq:K2m} \\ ,\n\\end{align}\nwhere the $a_1,\\, b_1$ etc. are the color indices of the reggeized gluons and $f_{ijk}$ the structure constants of \n$SU(N_c)$. \n\n\\begin{figure}[ht]\n\\psfrag{x1}{$x_1$} \\psfrag{x2}{$x_2$} \\psfrag{x3}{$x_3$} \\psfrag{x4}{$x_4$} \\psfrag{x5}{$x_5$} \\psfrag{p1}{$\\boldsymbol {p}_1$} \\psfrag{p2}{$\\boldsymbol {p}_2$} \\psfrag{p3}{$\\boldsymbol {p}_3$} \\psfrag{p4}{$\\boldsymbol {p}_4$} \\psfrag{p5}{$\\boldsymbol {p}_5$} \\psfrag{pmp2}{$\\boldsymbol {p}_{m+2}$} \\psfrag{a1}{$a_1$} \\psfrag{a2}{$a_2$} \\psfrag{amm1}{$a_{m-1}$} \\psfrag{am}{$a_m$} \\psfrag{b1}{$b_1$} \\psfrag{b2}{$b_2$}\n\\begin{center}\n\\includegraphics{reggluonvertex.eps}\n\\caption{\\small The $2\\rightarrow m$ reggeized gluon vertex. 
All momenta are taken as ingoing.} \\label{fig:vertex}\n\\end{center}\n\\end{figure}\n\nWritten in terms of $x$ variables, this becomes\n\\begin{align}\n&K_{2\\rightarrow m}^{\\{ b \\} \\rightarrow \\{a \\}}(x_{23},\\, x_{34};\\, x_{45},\\, \\ldots ,\\, x_{m+2,1},\\, x_{12}) = \\nonumber \\\\ &f_{a_1b_1c_1}f_{c_1a_2c_2}\\cdots f_{c_{m-1}a_mb_2} g^m \\left[ x_{24}^2 - \\frac{x_{34}^2x_{25}^2}{x_{35}^2} -\\frac{x_{23}^2x_{14}^2}{x_{13}^2} +\\frac{x_{23}^2x_{34}^2x_{15}^2}{x_{13}^2 x_{35}^2} \\right] \\ ,\n\\end{align}\nand is manifestly conformally covariant. The assignment of the momenta in \\eqref{eq:K2m} was chosen so that \nthe vertex takes a form independent of $m$ when written in terms of the $x_i$. Note that the last term vanishes \nwhen $m=2$ since then $x_1 = x_5$, and one recovers the corresponding term in the BFKL kernel. \n\nIn \\cite{reggevertex2a4} it was shown that the $2\\rightarrow 4$ reggeized gluon vertex exhibited the same coordinate representation $SL(2,C)$-invariance as the BFKL equation. This was taken to indicate that a unitary, two-dimensional CFT describing scattering amplitudes in the Regge limit should exhibit this $SL(2,C)$-invariance. Our results would seem to indicate that such a theory should also be covariant under the dual $SL(2,C)$.\n\n\\section{Conclusions}\n\n\nWe have shown that not only does the LLA BFKL kernel, and its extension in the form of the $2 \\rightarrow m$ reggeized gluon vertex, exhibit the ordinary $SL(2,C)$-symmetry found by Lipatov, but also a dual $SL(2,C)$, analogous to the dual conformal symmetry of $\\mathcal{N}=4$. It is tempting to interpret these symmetries as reductions to the transverse plane of the conformal and dual conformal symmetries of the supersymmetric theory, although it is not clear exactly how such a reduction should be carried out. Purely transverse versions of the conformal algebras are not symmetries of the 4-dimensional gauge theory amplitudes, but seem to emerge in the Regge limit. 
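The algebra behind the $x$-variable form of the vertex is pure telescoping: with the dual assignment $\boldsymbol p_i = x_i - x_{i+1}$ (indices cyclic), total momentum conservation is automatic and every consecutive sum of momenta collapses to a difference of two dual points. A small numerical sketch (plain Python; the points are random test data, with $m=3$):

```python
import random

random.seed(0)
n = 5                         # m + 2 reggeized gluons for m = 3
x = [(random.random(), random.random()) for _ in range(n)]

def sub(a, b): return (a[0] - b[0], a[1] - b[1])
def add(a, b): return (a[0] + b[0], a[1] + b[1])

# dual assignment p_i = x_i - x_{i+1}, indices mod n (p[0] stands for p_1)
p = [sub(x[i], x[(i + 1) % n]) for i in range(n)]

# momentum conservation is automatic
tot = (sum(v[0] for v in p), sum(v[1] for v in p))
assert max(abs(tot[0]), abs(tot[1])) < 1e-12

# telescoping: p_4 + p_5 + p_1 = x_4 - x_2, i.e. (p_4 + ... + p_1)^2 = x_{24}^2
lhs = add(add(p[3], p[4]), p[0])
assert max(abs(lhs[0] - (x[3][0] - x[1][0])),
           abs(lhs[1] - (x[3][1] - x[1][1]))) < 1e-12

# and p_3 + p_4 = x_3 - x_5, so the denominator (p_3 + p_4)^2 becomes x_{35}^2
s = add(p[2], p[3])
assert max(abs(s[0] - (x[2][0] - x[4][0])),
           abs(s[1] - (x[2][1] - x[4][1]))) < 1e-12
```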
\n\nAlso, the dual invariance of the reggeized gluon vertex suggests that a unitary two-dimensional CFT describing high-energy gauge theory should have both $SL(2,C)$ groups. In future work, having identified the dual $SL(2,C)$ one can try to understand the origin of the integrability of the Regge limit in terms of the integrability of $N=4$ SYM.\n\n\n\n\\vspace{5mm}\n\\centerline{\\bf Acknowledgments}\n\nWe would like to thank Lev Lipatov for useful discussions. The work of C. G. has been partially\nsupported by the Spanish DGI contract FPA2003-02877 and the CAM grant HEPHACOS\nP-ESP-00346. The work of J. G. is supported by a Spanish FPU grant.\n\n\n\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\nIn this paper, we are concerned with the $L^p$ mapping properties of \nthe pseudodifferential operators in the form \n\\begin{equation}\n\\label{eq:1}\nT_\\sigma f(x)=\\int\\limits_{{\\mathbf R}^n} \\sigma(x,\\xi) e^{2\\pi i \\xi x} \\hat{f}(\\xi) d\\xi.\n\\end{equation} \nThe operators $T_\\sigma$ have been subject of continuous interest \nsince the sixties. We should mention that their usefullness in \n the study of partial differential equations have been realized much \nearlier, but it seems that their systematic study began with the \nfundamental works of Kohn and Nirenberg, \\cite{KN} and H\\\"ormander, \n\\cite{Hor}. \n\nTo describe the \nresults obtained in these early papers, \ndefine the H\\\"ormander's class $S^m$, which consists \nof all functions $\\sigma(x,\\xi)$, so that \n\\begin{equation}\n\\label{eq:2}\n|D^\\beta_x D^\\alpha_\\xi \\sigma(x, \\xi)|\\leq C_{\\alpha, \\beta} (1+|\\xi|)^{m-|\\alpha|}.\n\\end{equation}\nfor all multiindices $\\alpha, \\beta$. 
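A standard example to keep in mind (textbook material, not taken from this paper) is $\sigma(\xi)=(1+|\xi|^2)^{m/2}$, which belongs to $S^m$. The sketch below checks the defining estimate \eqref{eq:2} numerically in one dimension for the first derivative, with the arbitrary test choice $m=-2$:

```python
# Textbook example (not from the paper): sigma(xi) = (1 + xi^2)^(m/2) is in S^m
# in one dimension.  Check |d/dxi sigma(xi)| <= C (1 + |xi|)^(m-1) on a grid,
# with the arbitrary test choice m = -2.
m = -2.0
C = 0.0
for k in range(-4000, 4001):
    xi = k * 0.05
    d1 = m * xi * (1 + xi * xi) ** (m / 2 - 1)     # analytic first derivative
    C = max(C, abs(d1) / (1 + abs(xi)) ** (m - 1))
assert 1.0 < C < 10.0    # the ratio stays bounded, consistent with the S^m estimate
```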
\nA classical theorem in \\cite{Hor} then states that \n$Op(\\sigma):H^{s+m,p}\\to H^{s, p}$ for all $s\\geq 0$ and \n$1^{|\\alpha|} \n|D_\\xi^\\alpha [\\sigma(x+y, \\xi)-\\sigma(x, \\xi)]| \\leq C_\\alpha \\omega(|y|)\n$$\nand assume that $\\sum_{j>0} \\omega(2^{-j})^2<\\infty$. Then for \nall $10} \nC^\\gamma S^0\\subset C^\\omega S^{0}_{1,0}$. \nRelated results can be found in the work of M. Taylor, \\cite{Taylor} (see Proposition 2.4, p. 23) and J. Marschall, \\cite{Marschall} where the \nspaces $C^\\omega$ are replaced by $H^{\\varepsilon,p}$ spaces with $p$ as large as one wishes and $0<\\varepsilon=\\varepsilon(p)<<1$ (see also \\cite{Taylor}, p. 61).\n\nOne of the purposes of this work is to get away from the \ncontinuity requirements on $x\\to \\sigma(x, \\xi)$. Even more importantly, \nwe would like to replace the pointwise conditions on the derivatives in $\\xi$ \nby averaged ones. This particular point has, in the author's opinion, not been thoroughly \nexplored \n in the literature; \nsee Theorem \\ref{theo:1} below. \n\nOn the other hand, a particular motivation for such considerations is \nprovided by the recent papers of Rodnianski-Tao \\cite{RT} and \nthe author \\cite{Stefanov}, where concrete parametrices \n(i.e. pseudodifferential operators, representing \napproximate solutions to certain PDE's) \nwere constructed for the \nsolutions of certain first-order perturbations of the wave \nand Schr\\\"odinger equations. \nA very quick inspection of these examples shows that\\footnote{Most \nreaders are \n likely to have their own fairly long list \nwith favorite examples, for which \n the H\\\"ormander condition fails.} \n {\\it \nthey do not obey\npointwise conditions on the derivatives of the \nsymbols} and thus these methods fail \nto imply $L^2$ bounds for \nthese (and related) problems. Moreover, one oftentimes has to deal with the \nsituation where the maps $\\xi\\to \\sigma(x, \\xi)$ \nare not smooth in a pointwise sense.
\nOn the other hand, one may still be able to control averaged quantities like \n\\begin{equation}\n\\label{eq:5}\n\\sup_x \\norm{\\sigma(x, \\xi)}{H^{n\/2}_\\xi}<\\infty.\n\\end{equation}\nThis will be our threshold condition for $L^2$ boundedness, \nwhich we try to achieve. \\\\ Heuristically at least, \\eqref{eq:5} \nmust be ``enough'' in some sense, \n since if we had simple symbols like $\\sigma(x,\\xi)=\\sigma_1(x) \\sigma_2(\\xi)$, \nthen the $L^2$ boundedness of $Op(\\sigma)$ is equivalent to \n $\\norm{\\sigma_1}{L^\\infty_x}<\\infty, \n\\norm{\\sigma_2}{L^\\infty_\\xi}<\\infty$. Clearly, \n$\\norm{\\sigma_2}{L^\\infty_\\xi({\\mathbf R}^n)}$ just fails to be controlled by \n\\eqref{eq:5}, but on the other hand, the quantity in \\eqref{eq:5} \nis controlled by the \nappropriate Besov space $B^{n\/2}_{2,1}$ norm. \n\n A final motivation for \nthe current study is to achieve a scale\n invariant condition, which gives an estimate of the \n$L^2\\to L^2$ ($L^p\\to L^p$) norm of \n$Op(\\sigma)$ in terms of a {\\it scale invariant quantity}, \nthat is, we aim at showing \nan estimate \n$$\n\\norm{Op(\\sigma) f}{L^p}\\leq C \\norm{\\sigma}{Y} \\norm{f}{L^p}, \n$$\nwhere for every $\\lambda\\neq 0$, one has $\\norm{\\sigma(\\lambda \\cdot,\\lambda^{-1} \n\\cdot) }{Y}=\\norm{\\sigma}{Y}$. \n\nIn that regard, note that the condition (which is one of the requirements of \n the H\\\"ormander class $S^0$) \n\\begin{equation}\n\\label{eq:6}\n\\sup_{x}| D_\\xi^\\alpha \\sigma(x, \\xi)|\\leq C_\\alpha |\\xi|^{-|\\alpha|}\n\\end{equation}\nis scale invariant in the sense described above.
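For the specific choice $Y=\sup_x \|\sigma(x,\cdot)\|_{\dot{H}^{n/2}_\xi}$ suggested by \eqref{eq:5}, the required invariance can be verified directly. A short computation (a sketch for $\lambda>0$, with the Fourier transform taken in the $\xi$ variable and the substitution $\zeta=\lambda\eta$, under which the factors $\lambda^{2n}\cdot\lambda^{-n}\cdot\lambda^{-n}$ cancel exactly because the exponent is $n/2$):

```latex
% Sketch: invariance of Y = sup_x ||sigma(x,.)||_{\dot H^{n/2}} under
% (x, xi) -> (lambda x, lambda^{-1} xi), lambda > 0.
\begin{eqnarray*}
\|\sigma(\lambda x, \lambda^{-1}\cdot)\|_{\dot{H}^{n/2}_\xi}^2
&=& \int_{{\mathbf R}^n} |\eta|^{n}\,
    \big|\lambda^{n}\,\widehat{\sigma(\lambda x,\cdot)}(\lambda\eta)\big|^2\, d\eta
 = \int_{{\mathbf R}^n} |\zeta|^{n}\,
    \big|\widehat{\sigma(\lambda x,\cdot)}(\zeta)\big|^2\, d\zeta \\
&=& \|\sigma(\lambda x, \cdot)\|_{\dot{H}^{n/2}_\xi}^2 .
\end{eqnarray*}
```

Taking $\sup_x$ leaves both sides unchanged, since $x\mapsto\lambda x$ is a bijection of ${\mathbf R}^n$.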
Moreover, \nby the standard Calder\\'on-Zygmund theory (see \\cite{Stein}), \nthe pointwise condition \\eqref{eq:6} together with \n$\\|T_\\sigma\\|_{L^2\\to L^2}<\\infty$ implies \n$$\nT_\\sigma f(x)=\\int K(x, x-y) f(y) dy,\n$$\nwhere $K(x,\\cdot)$ satisfies the H\\\"ormander-Mihlin conditions, namely $|K(x, z)|\\leq C|z|^{-n}$ and \n$|\\nabla_z K(x,z)|\\leq C|z|^{-n-1}$, where the constant $C$ depends on \nthe constants \n$C_\\alpha: |\\alpha|<[n\/2]+1$ in \\eqref{eq:6}. \nThis in turn is enough to conclude \nthat $T_\\sigma:L^p \\to L^p$ for all $12$, \nthere exists $\\sigma(x,\\xi)$ so that $\\sup_{x} | D_\\xi^{\\alpha}\\sigma(x, \\xi)|\\leq C_\\alpha |\\xi|^{-|\\alpha|}$ and $\\sup_x \\norm{\\sigma(x, \\cdot)}{W^{p, n\/p}}<\\infty$, \nbut $T_\\sigma$ fails to be bounded on $L^2({\\mathbf R}^n)$. \n\\end{theorem}\n{\\bf Remark:} \n\\begin{enumerate} \n\\item Note that the estimate on $T_\\sigma$ is scale invariant. \n\\item The sharpness claim of the theorem, roughly speaking, \nshows that in the scale of spaces\\footnote{Note that \nthese spaces scale the same and moreover \nby Sobolev embedding these are strictly decreasing sequence, at least \nfor $2\\leq p<\\infty$.} $W^{p, n\/p}$, $\\infty\\geq p\\geq 2$, one may \nnot require anything less than $W^{2,n\/2}=H^{n\/2}$ of the symbol in \norder to ensure $L^2$ boundedness.\n\\item The counterexample to which we refer in Theorem \\ref{theo:1} \nis a simple variation of the well-known example of $\\sigma\\in \nS^{0}_{1,1}$, the ``forbidden class'', \n which fails to be $L^2$ bounded, see \\cite{Stein}, p. 272 and Section \n\\ref{sec:counter} below. \n \\end{enumerate}\nOur next result concerns $L^p$ boundedness for $T_\\sigma$. 
\n\\begin{theorem}($L^p$ bounds) \n\\label{theo:3}\nFor the pseudodifferential operator $T_\\sigma$ there \nis the estimate for all $21$, there exists a homogeneous of degree zero symbol \n$\\sigma(x, \\xi):\\mathbf R^2\\times \\mathbf R^2\\to \\mathbf R^1$, \nso that $\\sup_{x, \\xi} |\\sigma(x, \\xi)|<\\infty$ and \n$\\sup_x \\norm{\\sigma(x,\\xi)}{W^{1, 1}(\\mathbf S^1)}<\\infty$, \nand so that $\\norm{T_\\sigma}{L^2\\to L^2}>N$. \n\\end{proposition}\nThe counterexample considered here is a smoothed-out version of the \nmaximal directional Hilbert transform in the plane $H_* f(x)=\\sup_{u\\in \\mathbf S^1} \n|H_u f(x)|$. We mention the spectacular recent result of \nLacey and Li, \\cite{LL} showing the boundedness of \n$H_*$ on $L^p(\\mathbf R^2): 20$, define \n$$\n\\mathcal C_\\delta f(x)=\\sup_{u>0} \\int_{\\mathbf R^1} (1-\\xi^2\/u^2)^{\\delta}_+ \ne^{2\\pi i \\xi x} \\hat{f}(\\xi) d\\xi.\n$$ \nClearly, in the limit $\\delta\\to 0$, we recover Carleson's operator. \nUnfortunately, one cannot conclude that \n$\\sup_{\\delta>0}\\|C_\\delta\\|_{L^p}<\\infty$, for\nthat would imply the famous Carleson-Hunt theorem. \nOn the other hand, define the maximal ``thin\ninterval operator''\n$$\nT_m f(x)=\\sup_{u>0} \\int_{\\mathbf R^1} \\varphi(2^m(1-\\xi^2\/u^2)) \ne^{2\\pi i \\xi x} \\hat{f}(\\xi) d\\xi.\n$$\nA \nsimple argument based on (the proof of) Theorem \\ref{theo:3} yields \n\\begin{proposition}\n\\label{prop:Carl}\nFor any $\\varepsilon>0, 10} \\int_{{\\mathbf R}^n} (1-|\\xi|^2\/u^2)^{\\delta}_+ \ne^{2\\pi i \\xi x} \\hat{f}(\\xi) d\\xi.\n$$\n{\\it in any dimension}. \n\\end{itemize}\n\\begin{proof}\nIt clearly suffices to show the pointwise estimate \n$|T_m f(x)|\\leq C M (\\sup_k P_k f)(x)$ for any $m$.\nThe statements about $B^0_{p,1}\\to L^p$ bounds follow by elementary Littlewood-Paley theory and
The restricted-to-weak estimate \n$F^0_{1,\\infty}\\to L^{1,\\infty}$ for $C_\\delta$ follows by summing an \n{\\it exponentially decaying} series in\nthe quasi-Banach space $L^{1, \\infty}$. \n\nBy support considerations, it is clear that \n\\begin{eqnarray*}\nT_m f(x) & = &\\sum\\limits_k \\sup_{u>0} \\int_{\\mathbf R^1} \\varphi(2^m(1-\\xi^2\/u^2)) \ne^{2\\pi i \\xi x}\\varphi(2^{-k}\\xi) \\hat{f}(\\xi) d\\xi= \\\\\n&=&\n\\sum\\limits_k \\sup_{u\\in (2^{k-2}, 2^{k+2})} \\int_{\\mathbf R^1} \\varphi(2^m(1-\\xi^2\/u^2)) \ne^{2\\pi i \\xi x}\\varphi(2^{-k}\\xi) \\hat{f}(\\xi) d\\xi= \\\\\n&=& \\sum\\limits_k T_{m, u(\\cdot)\\in (2^{k-2}, 2^{k+2})} f_k.\n\\end{eqnarray*}\nClearly, the requirement $u\\in (2^{k-2}, 2^{k+2})$ creates (almost) \ndisjointness in the $x$ support, whence \n\\begin{equation}\n\\label{eq:819}\n|T_m f(x)|\\leq C \\sup\\limits_k |T_{m, u(\\cdot)\\in (2^{k-2}, 2^{k+2})} f_k(x)|.\n\\end{equation}\nOur basic claim is that \n\\begin{equation}\n\\label{eq:820}\n|T_{m, u(\\cdot)\\in (2^{k-2}, 2^{k+2})} f_k(x)|\\leq C M(f_k).\n\\end{equation}\nClearly \\eqref{eq:819} and \\eqref{eq:820} imply $\\sup_m |T_m f(x)|\\leq C M(\\sup\\limits_k |f_k|)$, \nwhence the Proposition \\ref{prop:Carl}. \\\\\nBy scale invariance, \\eqref{eq:820} reduces to the case $k=0$, that is we need to show \n$$\n|T_{m, u(x)\\in (1\/4, 4)} P_0 f(x)|\\leq C M (P_0 f)(x).\n$$\nfor any Schwartz function $f$ and any \n$m>>1$. By \\eqref{eq:32} (in the proof of Theorem \\ref{theo:3} below), it will suffice to\nshow \n\\begin{equation}\n\\label{eq:920}\n\\sum\\limits_l 2^l \\sup\\limits_x \\|P_l^\\xi [\\varphi(2^m(1-\\xi^2\/u(x)^2)) \\varphi(\\xi)]\\|_{L^1(\\mathbf R^1)}\\lesssim 1.\n\\end{equation}\nfor any measurable function $u$, which takes its values in $(1\/4, 4)$. 
\\\\\nFor \\eqref{eq:920}, we have \n\\begin{eqnarray*}\n& & \\sum\\limits_{l0\\}}\ne^{2\\pi i \\xi x} \\hat{f}(\\xi) d\\xi$, which is closely related to the maximal\ndirectional Hilbert transform \n$$\nH_* f(x)=\\sup_{u}|H_u f(x)|=\\sup_u |\\int sgn(u, \\xi) \ne^{2\\pi i \\xi x} \\hat{f}(\\xi) \\varphi(\\xi)d\\xi|.\n$$\n$H_*$ was of course shown to be $L^p(\\mathbf R^2), p>2$ bounded by \nLacey and Li, \\cite{LL} by very sophisticated time-frequency analysis methods. \n\\begin{proposition}\n\\label{prop:mdh}\nFor the ``thin big circle'' multiplier \n$$\nT_m f(x)= \\sup_{u\\in \\mathbf S^{n-1}} |\\int_{{\\mathbf R}^n} \\varphi(2^m\\dpr{u}{\\xi\/|\\xi|}) \ne^{2\\pi i \\xi x} \\hat{f}(\\xi) \\varphi(\\xi)d\\xi|.\n$$\nwe have \n\\begin{equation}\n\\label{eq:kak}\n\\|T_m f\\|_{L^2\\to L^2}\\leq C_\\varepsilon 2^{m(n\/2-1)}\n\\end{equation}\nIn particular \n$$\n\\|H^*_\\delta\\|_{L^2({\\mathbf R}^n)\\to L^2({\\mathbf R}^n)}\\leq C_{p, \\varepsilon, \\delta} 2^{n\/2-1}.\n$$\n\\end{proposition}\n{\\bf Remark:} \n\\begin{itemize}\n\\item We believe that the operator $T_m$ ($m>>1$) \nhas a particular connection to the \nKakeya maximal function and the corresponding Kakeya problem. \nIndeed, the kernel\nof the corresponding singular integral behaves like a \n($L^1$ normalized) characteristic function of a rectangle with long side along $u$ of length $2^m$\nand $(n-1)$ short sides of length $1$ in the transverse directions! \n\\item In relation to that, one expects the conjectured Kakeya bounds \n$$\n\\|T_m f\\|_{L^p\\to L^p}\\leq C_\\varepsilon 2^{m(n\/p-1)}\n$$\nfor $p\\leq n$ \nto hold, while one only gets \n$$\n\\|T_m f\\|_{L^p\\to L^p}\\leq C_\\varepsilon 2^{m(n\/p-2\/p)}\n$$\nas a consequence of Theorem \\ref{theo:4}. Nevertheless, the two match when \n$p=2$. So it seems that \\eqref{eq:kak}, at\nleast in principle, captures the Kakeya conjecture for \n$p=2$ in general and in\nparticular the full Kakeya conjecture in two dimensions. 
\n\nSince our estimates do not seem to contribute much toward the resolution of any\nnew Kakeya estimates, we do not pursue here the exact relationship between $T_m$\nand the Kakeya maximal operator, although from our heuristic arguments above it\nshould be clear that it is a close one. \n\\end{itemize}\n\\begin{proof}\nWe proceed as in the proof of Proposition \\ref{prop:Carl}. We need only show \n\\begin{equation}\n\\label{eq:990}\n\\sum\\limits_{l} 2^{l(n-1)\/2} \\sup\\limits_x \\|P_l^{\\xi\/|\\xi|} \n\\varphi(2^m\\dpr{u(x)}{\\xi\/|\\xi|})\\|_{L^2(\\mathbf S^{n-1})}\\lesssim 1.\n\\end{equation}\nWe have \n\\begin{eqnarray*}\n& & \\sum\\limits_{l0} t^{-n} |f*\\Phi(t^{-1} \\cdot)(x)|\\leq C \\|\\Phi\\|_{L^1} Mf(x),\n\\end{equation}\nfor a radially dominated function $\\Phi$. \nFor integer values of $s$, we may define $W^{p,s}$ to be \nthe Sobolev space with $s$ derivatives in $L^p$, $1\\leq p\\leq \\infty$, with the corresponding norm \n$$\n\\norm{f}{W^{p,s}}:=\\sum\\limits_{|\\alpha|\\leq s} \\norm{D^\\alpha_x f}{L^p}. \n$$\nEquivalently, and for noninteger values of $s$, define \n$$\n\\norm{f}{W^{p,s}}:=\\norm{f}{L^p}+ \n\\|(\\sum\\limits_{l=0}^\\infty 2^{2l s}|P_l f|^2)^{1\/2}\\|_{L^p}\n$$\nand its homogeneous analogue \n$$\n\\norm{f}{\\dot{W}^{p,s}}:=\n\\|(\\sum\\limits_{l=-\\infty}^\\infty 2^{2l s}|P_l f|^2)^{1\/2}\\|_{L^p}.\n$$\nNote $W^{p,s}=\\dot{W}^{p,s}\\cap L^p$. \\\\\nThe (homogeneous) Besov spaces $\\dot{B}_{p,q}^s$, \nwhich scale like $\\dot{W}^{p,s}$, are defined as follows \n$$\n\\norm{f}{\\dot{B}_{p,q}^s}:= \n(\\sum\\limits_{l\\in \\mathcal Z} 2^{l sq }\\|P_l f\\|_{L^p}^q)^{1\/q}.\n$$\nThe Triebel-Lizorkin spaces are defined via \n$$\n\\norm{f}{\\dot{F}_{p,q}^s}:= \n\\|(\\sum\\limits_{l\\in \\mathcal Z} 2^{l sq }|P_l f|^q)^{1\/q}\\|_{L^p}.\n$$\n\n\\subsection{Fourier analysis on $\\mathbf S^{n-1}$} In this section, we define the Sobolev and Besov spaces for functions $q$ defined on $\\mathbf S^{n-1}$. 
For that, the standard approach is to fix the basis of the spherical harmonics and define the Littlewood-Paley operators by projecting over the corresponding set of the harmonics within the fixed frequency. \n\nIntroduce the angular differentiation operators $\\Omega_{i j}=x_j \\partial_i - x_i \\partial_j$. It is well-known that $\\{\\Omega_{ i j}\\}_{i\\neq j}$ generate the algebra of all differential operators acting on $C^\\infty(\\mathbf S^{n-1})$. The spherical Laplacian is defined via \n$$\n\\Delta_{sph}=\\sum\\limits_{i< j} \\Omega_{i j}^2.\n$$\nThe spherical harmonics $\\{Y^n_{l,k}\\}_{k\\in A^n_l}$ \n are eigenfunctions of $\\Delta_{sph}$, so that $\\Delta_{sph} Y^n_{l,k}=-l(n-2+l) Y^n_{l,k}$, where $l\\geq 0$ and $k$ varies in a finite set $A_l^n$. \n An equivalent way to define them is to take all the \nhomogeneous of degree $l$ polynomials that are solutions to \n \\begin{equation}\n \\label{eq:45}\n (\\partial_r^2+ r^{-1} \\partial_r + r^{-2} \\Delta_{sph})Y^l=0\n \\end{equation}\n It turns out that \\eqref{eq:45} \nhas $\\left(\\begin{array}{c} n+l-1 \\\\ l\\end{array}\\right)$ \nlinearly independent solutions \n $\\{Y^n_{l,k}\\}_{k\\in A^n_l}$. \nAnother important property of the family $\\{Y^n_{l,k}\\}$ \n is that it forms \n an orthonormal basis for $L^2(\\mathbf S^{n-1})$. \n \n Let $f:\\mathbf S^{n-1}\\to \\mathcal C$ be a smooth function. One can then define \nthe expansion in spherical harmonics in the usual way \n$$\nf(\\theta)=\\sum\\limits_{l,k\\in A_l^n} c_{l,k}^n Y^n_{l,k}(\\theta), \n$$\nwhere $c_{l,k}^n=\\dpr{f}{Y^n_{l,k}}_{L^2(\\mathbf S^{n-1})}$.
\n The Littlewood-Paley operators may be defined via \n $$\n P_m^{\\xi\/|\\xi|} f = \\sum\\limits_{l,k\\in A_l^n} c_{l,k}^n\\varphi(2^{-m} l) \nY^n_{l,k}(\\theta), \n $$\n and there is the equivalence for all\\footnote{The constant \nof equivalence here depends only on $p$ and the cutoff \nfunction $\\varphi$.} $10} \\f{D_\\xi^\\alpha q(\\theta_0)}{\\alpha!}(\\theta-\\theta_0)^\\alpha.\n\\end{equation}\nHere, $D_\\xi^\\alpha q(\\theta_0)$ should be understood as taking $\\alpha$ derivatives of \nthe corresponding homogeneous polynomial and evaluating at $\\theta_0$. \nThe following lemma is standard, but since we need a specific dependence of our \nestimates upon the parameter $\\alpha$, we state it here for completeness. \n\\begin{lemma}\n\\label{le:os}\nLet $q:\\mathbf S^{n-1}\\to \\mathcal C$ and $q_m=P^{\\xi\/|\\xi|}_m q$. Then, there is a constant $C_n$, so that \nfor every $1\\leq p\\leq \\infty$, there is the estimate \n$$\n\\|D^\\alpha_\\xi q_m\\|_{L^p(\\mathbf S^{n-1})}\\leq C_n^{|\\alpha|} 2^{m |\\alpha|} \\norm{q_m}{L^p(\\mathbf S^{n-1})}.\n$$\n\\end{lemma}\nThe proof of Lemma \\ref{le:os} is standard. One way to proceed is to note that if we\nextend the function $q_m$ off $\\mathbf S^{n-1}$ to some annulus, say via \n$Q_m(\\xi)=\\varphi(|\\xi|) q_m(\\xi\/|\\xi|)$, then \n$$\n \\|D^\\alpha_\\xi q_m\\|_{L^p(\\mathbf S^{n-1})}\\lesssim \\|D^\\alpha_\\xi Q_m \\|_{L^p({\\mathbf R}^n)}\n$$\n\n\n\n\n\\section{$L^p$ estimates for PDO with rough symbols}\nWe start with the $L^2$ estimate to illustrate the main ideas in the proof. \n\\subsection{$L^2$ estimates: Proof of Theorem \\ref{theo:1}}\nOur first remark is that we will for convenience consider only \nreal-valued symbols $\\sigma$, since of course the general \ncase follows from splitting into a real and imaginary part.
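Lemma \ref{le:os} is a spherical analogue of the classical Bernstein inequality: on $\mathbf S^1$ (the case $n=2$), $P^{\xi/|\xi|}_m q$ is a trigonometric polynomial of degree $\sim 2^m$, and the lemma reduces to $\|q'\|_{L^p}\lesssim 2^m \|q\|_{L^p}$. A quick numerical illustration for $p=\infty$ (plain Python; the degree and coefficients are arbitrary test data):

```python
import math, random

random.seed(1)
N = 8                                    # degree ~ 2^m with m = 3
a = [random.uniform(-1, 1) for _ in range(N + 1)]
b = [random.uniform(-1, 1) for _ in range(N + 1)]

def q(t):   # random trigonometric polynomial of degree N
    return sum(a[k] * math.cos(k * t) + b[k] * math.sin(k * t) for k in range(N + 1))

def dq(t):  # its derivative
    return sum(k * (-a[k] * math.sin(k * t) + b[k] * math.cos(k * t)) for k in range(N + 1))

grid = [2 * math.pi * j / 20000 for j in range(20000)]
sup_q = max(abs(q(t)) for t in grid)
sup_dq = max(abs(dq(t)) for t in grid)
assert sup_dq <= N * sup_q * 1.001       # Bernstein: ||q'||_inf <= N ||q||_inf
```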
\n\nTo show $L^2$ estimates for $T_\\sigma$, it is equivalent to show $L^2$ \nestimates for the adjoint operator, which takes the \nform\\footnote{ There is the small technical problem \nthat the $\\xi$ integral does not converge \nabsolutely. \nThis can be resolved by judicious placement of cutoffs \n$\\chi(\\xi\/N)$, after which, one may subsume that \npart in $\\hat{f}(\\xi)$. In the end \nwe let $N\\to \\infty$ and all the estimates will be \nindependent of the cutoff constant $N$.} \n$$\nT_\\sigma^* g(x)=\\int e^{2\\pi i \\xi\\cdot x} (\\int e^{-2\\pi i \\xi\\cdot y} \n[g(y) \\sigma(y, \\xi)] dy )d\\xi.\n$$\nOur next task is to decompose $T_\\sigma^* g$ and we start by \ntaking a Littlewood-Paley partition of unity in the $\\xi$ variable for $g$. We have \n$$\nT_\\sigma^* g (x)=\\sum\\limits_{l\\in \\mathcal Z} \\int e^{2\\pi i \\xi\\cdot x} \n(\\int e^{-2\\pi i \\xi\\cdot y} \n[ g(y) P_l^\\xi \\sigma(y, \\xi)] dy )d\\xi\n$$ \nNow that the function $g$ is frequency localized at frequency $2^l$, \nwe introduce further decomposition in the \n$\\xi$ integration. \n\nFor the $L^2$ estimates, because of the orthogonality, \nwe only need rough partitions, so for each fixed $l$, \ntake a tiling of ${\\mathbf R}^n$ composed of cubes $\\{Q\\}$ \nwith diameter $2^{-l}$. Denote the characteristic functions of $Q$ by $\\chi_Q$. We have \n$$\nT_\\sigma^* g (x)=\\sum\\limits_{l\\in \\mathcal Z} \\sum\\limits_{Q:d(Q)=2^{-l}} \n\\int e^{2\\pi i \\xi\\cdot x} \\chi_Q(\\xi)\n(\\int e^{-2\\pi i \\xi\\cdot y} \n[ g(y) P_l^\\xi \\sigma(y, \\xi)] dy )d\\xi\n$$\nThe main point of our next decompositions is that the \nfunction $P_l^\\xi \\sigma$ is essentially constant in $\\xi$ \nover any fixed cube $Q$. 
We exploit that by observing that \n$\\xi\\to P_l^\\xi\\sigma(x, \\xi)$ is an entire function and there is \n the expansion \n$$\nP_l^\\xi \\sigma(y, \\xi) \\chi_Q(\\xi)=[P_l^\\xi \\sigma(y, \\xi_Q)+\\sum\\limits_{\\alpha: |\\alpha|>0}^\\infty\n\\f{D_\\xi^\\alpha P_l^\\xi \\sigma(y, \\xi_Q)}{\\alpha!} (\\xi-\\xi_Q)^\\alpha]\\chi_Q(\\xi)\n$$\nfor any fixed $y$ and for any $\\xi_Q\\in Q$. Note that \n$D_\\xi^\\alpha P_l^\\xi \\sigma(y, \\xi_Q)\\sim 2^{l|\\alpha|} P_l^\\xi \\sigma(y, \\xi_Q)$ and \n$ |(\\xi-\\xi_Q)^\\alpha|\\lesssim 2^{-l|\\alpha|}$, by support considerations \n(recall $d(Q)=2^{-l}$). On a heuristic level, by the presence of $\\alpha!$, \none should think of the series \nabove as behaving like $P_l^\\xi \\sigma(y, \\xi_Q)$ plus an exponentially small tail. \n\nGoing back to $D_\\xi^\\alpha P_l^\\xi$, as we have mentioned in Section \\ref{sec:prelim}, we can write \n$D_\\xi^\\alpha P_l^\\xi=2^{l|\\alpha|} P_{l, \\alpha}^\\xi$, where \n$P_{l, \\alpha}^\\xi$ is given by the multiplier $\\varphi(2^{-l}\\xi) (2^{-l} \\xi)^\\alpha$. \nIt is clear that \n$\\|P_{l, \\alpha}^\\xi f\\|_{L^2({\\mathbf R}^n)}\\leq C_n^{|\\alpha|}\\|P_l^\\xi f\\|_{L^2({\\mathbf R}^n)}$. \n\nThus, we have arrived at \n\\begin{eqnarray*}\n& & T_\\sigma^* g (x)=\\sum\\limits_{|\\alpha|\\geq 0} \\sum\\limits_{l\\in \\mathcal Z} \\sum\\limits_{Q:d(Q)=2^{-l}} \n\\int e^{2\\pi i \\xi\\cdot x} \\chi_Q(\\xi) \n\\f{2^{l|\\alpha|}(\\xi-\\xi_Q)^\\alpha}{\\alpha!} \\times \\\\\n& & \\times \n(\\int e^{-2\\pi i \\xi\\cdot y} \n[ g(y) P_{l,\\alpha}^\\xi \\sigma(y, \\xi_Q)] dy )d\\xi= \n\\sum\\limits_{\\alpha}(\\alpha!)^{-1} \\sum\\limits_{l, Q:d(Q)=2^{-l}} P_{Q, l, \\alpha} [ g(\\cdot) \nP_{l,\\alpha}^\\xi \\sigma(\\cdot, \\xi_Q)], \n\\end{eqnarray*}\nwhere $P_{Q, l, \\alpha}$ acts via $\\widehat{P_{Q, l, \\alpha} f}(\\xi)= \\chi_Q(\\xi) \n2^{l|\\alpha|}(\\xi-\\xi_Q)^\\alpha \\hat{f}(\\xi)$.
Note \n$$\n\\|P_{Q, l, \\alpha}\\|_{L^2\\to L^2}=\\sup_\\xi | \\chi_Q(\\xi) \n2^{l|\\alpha|}(\\xi-\\xi_Q)^\\alpha|\\leq 1. \n$$\n\nFor fixed $l, \\alpha$, take $L^2$ norm. Using the orthogonality of \n$P_{Q, l, \\alpha}$ and its boundedness on $L^2$, we obtain \n\\begin{eqnarray*}\n& & \\|\\sum\\limits_{Q:d(Q)=2^{-l}} P_{Q, l, \\alpha} [ g(\\cdot) \nP_{l,\\alpha}^\\xi \\sigma(\\cdot, \\xi_Q)]\\|_{L^2}^2= \n\\sum\\limits_{Q:d(Q)=2^{-l}} \\|P_{Q, l, \\alpha} [ g(\\cdot) \nP_{l,\\alpha}^\\xi \\sigma(\\cdot, \\xi_Q)]\\|_{L^2}^2\\leq \\\\\n& & \\leq \\sum\\limits_{Q:d(Q)=2^{-l}} \\|g(\\cdot) \nP_{l,\\alpha}^\\xi \\sigma(\\cdot, \\xi_Q)]\\|_{L^2}^2=\\int |g(y)|^2 \n(\\sum\\limits_{Q} |P_{l,\\alpha}^\\xi \\sigma(y, \\xi_Q)|^2) dy . \n\\end{eqnarray*}\nWe now again use $P_{l,\\alpha}^\\xi \\sigma(y, \\xi_Q)\\sim P_{l,\\alpha}^\\xi \\sigma(y, \\eta)$ for any\n$\\eta\\in Q$, this time to estimate the contribution of $\\sum_{Q} |P_{l,\\alpha}^\\xi \\sigma(y, \\xi_Q)|^2$. \nThis is done as follows. Expand \n\\begin{equation}\n\\label{eq:859}\nP_{l, \\alpha} ^\\xi \\sigma(y, \\xi_Q)= \\sum\\limits_{\\beta: |\\beta|\\geq 0}^\\infty\n\\f{D_\\eta^\\beta P_{l, \\alpha}^\\eta \\sigma(y, \\eta)}{\\beta!} (\\xi_Q-\\eta)^\\beta,\n\\end{equation}\nto be used for $\\eta\\in Q$. 
Thus, if we average over $Q$, \n\\begin{eqnarray*}\n& & |P_{l, \\alpha} ^\\xi \\sigma(y, \\xi_Q)|=\\left(|Q|^{-1} \\int_Q |\\sum\\limits_{\\beta: |\\beta|\\geq 0}^\\infty\n\\f{D_\\xi^\\beta P_{l, \\alpha}^\\xi \\sigma(y, \\eta)}{\\beta!} (\\xi_Q-\\eta)^\\beta|^2 d\\eta\n\\right)^{1\/2}\\leq \\\\\n& &\\leq |Q|^{-1\/2} \\sum\\limits_{\\beta: |\\beta|\\geq 0}^\\infty \\f{C_n^{|\\beta|} \n2^{-l|\\beta|}}{\\beta!} \\left(\\int\\limits_Q | D_\\xi^\\beta P_{l, \\alpha}^\\xi \\sigma(y, \\eta)|^2 d\\eta\\right)^{1\/2},\n\\end{eqnarray*}\nand so (recalling $|Q|\\sim 2^{-l n}$)\n\\begin{eqnarray*}\n& &(\\sum\\limits_{Q} |P_{l,\\alpha}^\\xi \\sigma(y, \\xi_Q)|^2)^{1\/2}\\leq C 2^{l n\/2} \\sum\\limits_{\\beta} \\f{C_n^{|\\beta|} \n2^{-l|\\beta|}}{\\beta!} \\|D_\\xi^\\beta P_{l, \\alpha}^\\xi \\sigma(y, \\cdot)\\|_{L^2} \\leq \\\\\n& & \\leq \nC 2^{l n\/2} \\|P_{l, \\alpha}^\\xi \\sigma(y, \\cdot)\\|_{L^2}.\n\\end{eqnarray*}\nThus, \n\\begin{eqnarray*}\n& & \\|T_\\sigma^* g\\|_{L^2} \\lesssim \\sum\\limits_{l, \\alpha} 2^{l n\/2} (\\alpha!)^{-1} \\left(\\int |g(y)|^2 \n \\|P_{l, \\alpha}^\\xi \\sigma(y, \\cdot)\\|_{L^2}^2 dy\\right)^{1\/2}\\lesssim \\\\\n & &\\lesssim \n \\sum\\limits_{l, \\alpha} 2^{l n\/2} (\\alpha!)^{-1} \\|g\\|_{L^2} \\sup\\limits_y \\|P_{l, \\alpha}^\\xi \\sigma(y, \\cdot)\\|_{L^2}. \n\\end{eqnarray*}\nFurthermore, \n$$\n\\sup_y \\|P_{l,\\alpha}^\\xi \\sigma(y,\\cdot)\\|_{L^2}\\leq C_n^{|\\alpha|} \n\\sup_y \\|P_{l}^\\xi \\sigma(y,\\cdot)\\|_{L^2}.\n$$\nPutting everything together, \n\\begin{eqnarray*}\n& & \\|T_\\sigma^* g\\|_{L^2}\\leq C_n \n\\norm{g}{L^2}\\sum\\limits_{\\alpha} (\\alpha!)^{-1} C_n^{|\\alpha|} \n\\sum_l 2^{ln\/2} \n \\sup_y \\|P_{l}^\\xi \\sigma(y,\\cdot)\\|_{L^2} \n\\leq \\\\\n& & \\leq D_n \\norm{g}{L^2} \\sum_l 2^{ln\/2} \n \\sup_y \\|P_{l}^\\xi \\sigma(y,\\cdot)\\|_{L^2}, \n\\end{eqnarray*}\nas desired. \n\\subsection{$L^p$ estimates: $2<p\\leq \\infty$}
Also since $P_k:L^1\\to L^1$, we get \n$$\n\\|\\widehat{\\psi^\\alpha_{l,Q}}\\|_{L^1}\\leq C_n^{|\\alpha|} \n\\|P_k[\\widehat{\\psi_{l,Q}}]\\|_{L^1}\\leq C_n^{|\\alpha|} \n\\|\\widehat{\\psi_{l,Q}}\\|_{L^1}\\leq \nC_n^{|\\alpha|}.\n$$\nThus, it remains to show for every $x$ and for {\\it any}\n$\\{\\xi_Q\\}, \\xi_Q\\in Q$\n\\begin{equation}\n\\label{eq:35} \n\\sum\\limits_\\alpha \\f{2^{-l|\\alpha|}}{\\alpha!} \\sum\\limits_Q \n|D_\\xi^\\alpha P_l^\\xi \\sigma(x, \\xi_Q)| \\leq C_n \n2^{ln} \\sup_y \\|P_l^\\xi \\sigma(y, \\cdot)\\|_{L^1({\\mathbf R}^n)}. \n\\end{equation}\nThis is done similarly to the $L^2$ case. By \\eqref{eq:859} and by averaging over \nthe corresponding $Q$,\n\\begin{eqnarray*}\n& & \\sum\\limits_\\alpha \\f{2^{-l|\\alpha|}}{\\alpha!} \\sum\\limits_Q \n|D_\\eta^\\alpha P_l^\\eta \\sigma(x, \\xi_Q)|\\leq \\sum\\limits_{\\alpha, \\beta} \\f{2^{-l|\\alpha|}}{\\alpha!\\beta!} \n \\sum\\limits_Q |Q|^{-1} \\int_Q |D_\\eta^{\\alpha+\\beta} P_l^\\eta \\sigma(x, \\eta)(\\eta-\\xi_Q)^\\beta|d\\eta\\\\\n & & \\lesssim 2^{ln} \\sum\\limits_{\\alpha, \\beta} \\f{2^{-l(|\\alpha|+|\\beta|)}}{\\alpha!\\beta!} \n \\int |D_\\eta^{\\alpha+\\beta} P_l^\\eta \\sigma(x, \\eta)|d\\eta\\lesssim 2^{ln} \\|P_l^\\eta \\sigma(x,\n \\cdot)\\|_{L^1}. \n\\end{eqnarray*}\n\n\n\n\n\\section{$L^p$ estimates for homogeneous of degree zero symbols}\nWe start with the $L^2$ estimate, since it is very similar to the corresponding \nestimate \\eqref{eq:7} and contains the main ideas for the $L^p$ estimate. \n\n\\subsection{$L^2$ estimates for homogeneous of degree zero symbols}\nConsider $T_\\sigma^*$ and introduce the Littlewood-Paley partition of unity $P_l^{\\xi\/|\\xi|}$. 
We have \n$$\nT_\\sigma^* g (x)=\\sum\\limits_{l=0}^\\infty \\int e^{2\\pi i \\xi\\cdot x} \n(\\int e^{-2\\pi i \\xi\\cdot y} \n[ g(y) P_l^{\\xi\/|\\xi|} q(y, \\xi\/|\\xi|)] dy )d\\xi.\n$$ \nFor every $l\\geq 0$, introduce a partition of unity on $\\mathbf S^{n-1}$, say $\\{K\\}$, which consists of disjoint \nsets of diameter comparable to $2^{-l}$. One may form $\\{ K\\}$ by introducing a $2^{-l}$ net on $\\mathbf S^{n-1}$, say $\\xi^m_{l}$, form the conic sets $H_m^l=\\{\\xi\\in{\\mathbf R}^n: \\: |\\xi\/|\\xi|-\\xi^m_l|\\leq 2^{-l}\\}$ and construct \\\\ $K^l_m=H_m^l\\setminus \\cup_{j=0}^{m-1} H_{j}^l$. We have \n\\begin{equation}\n\\label{eq:400}\nT_\\sigma^* g (x)=\\sum\\limits_{l=0}^\\infty \\sum\\limits_m \\int e^{2\\pi i \\xi\\cdot x} \\chi_{K^l_m}(\\xi)\n(\\int e^{-2\\pi i \\xi\\cdot y} \n[ g(y) P_l^{\\xi\/|\\xi|} q (y, \\xi\/|\\xi|)] dy )d\\xi.\n\\end{equation}\nNow that the symbol is frequency localized around frequencies $\\sim 2^l$ \n and the sets $K^l_m\\cap \\mathbf S^{n-1}$ have diameters comparable to \n$2^{-l}$, we expand $P_l^{\\xi\/|\\xi|} q(y, \\xi\/|\\xi|)$ around an {\\it arbitrary point} $\\theta_l^m\\in K_l^m$.\nAccording to \\eqref{eq:36}, we have for all $\\xi\\in K^l_m$, \n$$\nP_l^{\\xi\/|\\xi|} q(y, \\xi\/|\\xi|) \n=\\sum\\limits_{\\alpha\\geq 0} \\f{D_\\xi^\\alpha P_l^{\\xi\/|\\xi|} q(y,\\theta_l^m)}{\\alpha!}(\\xi\/|\\xi|-\\theta_l^m)^\\alpha.\n$$\nEntering this new expression in \\eqref{eq:400} yields \n\\begin{eqnarray*}\n& & \nT_\\sigma^* g (x)=\\sum\\limits_{l=0}^\\infty \\sum\\limits_m \\sum\\limits_{\\alpha} (\\alpha!)^{-1} \n\\int e^{2\\pi i \\xi\\cdot x} \\chi_{K^l_m}(\\xi) (\\xi\/|\\xi|-\\theta_l^m)^\\alpha \\times \\\\\n& & \\times \n(\\int e^{-2\\pi i \\xi\\cdot y} \n[ g(y) D_\\xi^\\alpha P_l^{\\xi\/|\\xi|} q (y, \\theta_l^m)] dy )d\\xi= \\\\\n& & =\n\\sum\\limits_{l=0}^\\infty \\sum\\limits_m \\sum\\limits_{\\alpha} (\\alpha!)^{-1} Z_{l,m}^\\alpha [g(\\cdot) 2^{-l|\\alpha|} \n D_\\xi^\\alpha P_l^{\\xi\/|\\xi|} q (\\cdot , \\theta_l^m)] ,\n\\end{eqnarray*}\nwhere $Z_{l,m}^\\alpha$ is 
given by the multiplier $\\chi_{K^l_m}(\\xi) 2^{l |\\alpha|} \n(\\xi\/|\\xi|-\\theta_l^m)^\\alpha$. Note the disjoint support of the multipliers \n$\\{Z_{l,m}^\\alpha\\}_m$ and $\\|Z_{l, m}^\\alpha\\|_{L^2\\to L^2}=\\sup_\\xi|\\chi_{K^l_m}(\\xi) 2^{l |\\alpha|} \n(\\xi\/|\\xi|-\\theta_l^m)^\\alpha| \\leq 4^{|\\alpha|}$. \\\\\nTake the $L^2$ norm of $T_\\sigma^* g$: \n\\begin{eqnarray*}\n& & \\norm{T_\\sigma^* g}{L^2({\\mathbf R}^n)}\\lesssim \n\\sum\\limits_{l=0}^\\infty \\sum\\limits_{\\alpha} (\\alpha!)^{-1}\\left(\\sum\\limits_m \n\\norm{Z_{l, m}^\\alpha [g(\\cdot) 2^{-l|\\alpha|} \n D_\\xi^\\alpha P_l^{\\xi\/|\\xi|} q (\\cdot , \\theta_l^m) ]}{L^2}^2 \\right)^{1\/2}\\leq \\\\\n& &\\leq \\sum\\limits_{l=0}^\\infty \\sum\\limits_{\\alpha} \\f{4^{|\\alpha|}}{\\alpha!} \\left(\\sum\\limits_m \\norm{g(\\cdot) 2^{-l|\\alpha|} \n D_\\xi^\\alpha P_l^{\\xi\/|\\xi|} q (\\cdot , \\theta_l^m) }{L^2}^2\\right)^{1\/2}.\n\\end{eqnarray*}\nWe proceed to further bound the expression in the $m$ sum. Since\n$$\n\\sum\\limits_m \\norm{g(\\cdot) 2^{- l|\\alpha|} \n D_\\xi^\\alpha P_l^{\\xi\/|\\xi|} q (\\cdot , \\theta_l^m)}{L^2}^2=2^{-2 l|\\alpha|} \\int |g(y)|^2 \\left(\\sum\\limits_m \n| D_\\xi^\\alpha P_l^{\\xi\/|\\xi|} q (y , \\theta_l^m)|^2\\right) dy, \n$$\nmatters reduce to a good estimate for $\\sum\\limits_m \n| D_\\xi^\\alpha P_l^{\\xi\/|\\xi|} q (y , \\theta_l^m)|^2$. \nWe proceed as before. 
By \\eqref{eq:36}, we get for all $\\eta\\in K^l_m\\cap \\mathbf S^{n-1}$, \n\\begin{equation}\n\\label{eq:n3}\n D_\\xi^\\alpha P_l^{\\xi\/|\\xi|} q(y, \\theta_l^m) \n=\\sum\\limits_{\\beta\\geq 0} \\f{D_\\xi^{\\alpha+\\beta} P_l^{\\eta\/|\\eta|}q(y,\\eta)}{\\beta!}(\\theta_l^m-\\eta)^\\beta.\n\\end{equation}\nAveraging over $K^l_m\\cap \\mathbf S^{n-1}$ and taking into account $|K^l_m\\cap \\mathbf S^{n-1}|\\sim 2^{-l(n-1)}$ yields \n\\begin{eqnarray*}\n& & (\\sum\\limits_m \n| D_\\xi^\\alpha P_l^{\\xi\/|\\xi|} q (y , \\theta_l^m)|^2)^{1\/2} \\lesssim \\\\\n& & \\lesssim \\sum\\limits_{\\beta} \\f{2^{-l |\\beta|}}{\\beta!}\n(\\sum\\limits_m \n|K^l_m\\cap \\mathbf S^{n-1}|^{-1} \\int\\limits_{K^l_m\\cap \\mathbf S^{n-1}} |D_\\xi^{\\alpha+\\beta} P_l^{\\eta\/|\\eta|}q(y,\\eta)|^2\nd\\eta)^{1\/2}\\\\\n& &\\lesssim 2^{l(n-1)\/2} \\sum\\limits_{\\beta} \\f{2^{-l |\\beta|}}{\\beta!} \n\\|D_\\xi^{\\alpha+\\beta} P_l^{\\eta\/|\\eta|}q(y,\\cdot)\\|_{L^2} \\\\\n& &\\lesssim 2^{l[(n-1)\/2+|\\alpha|]} \n\\sum\\limits_{\\beta} \\f{C_n^{|\\alpha|+|\\beta|}}{\\beta!} \n\\|P_l^{\\eta\/|\\eta|}q(y,\\cdot)\\|_{L^2(\\mathbf S^{n-1})} \\\\ \n& &\\leq C_n^{|\\alpha|}2^{l[|\\alpha|+(n-1)\/2]}\n\\|P_l^{\\eta\/|\\eta|}q(y,\\cdot)\\|_{L^2(\\mathbf S^{n-1})}.\n\\end{eqnarray*}\nPutting this back into the estimate for $\\|T_\\sigma^* g\\|_{L^2({\\mathbf R}^n)}$ implies \n$$\n\\norm{T_\\sigma^* g}{L^2({\\mathbf R}^n)}\\lesssim \\|g\\|_{L^2} \\sum\\limits_l 2^{l(n-1)\/2} \\sup\\limits_y \n\\|P_l^{\\eta\/|\\eta|}q(y,\\cdot)\\|_{L^2(\\mathbf S^{n-1})},\n$$\nas desired. \n\\subsection{$L^p$ estimates for homogeneous of degree zero multipliers}\nFix $p$, $2\\leq p\\leq \\infty$. 
\nTo verify the estimate $\\|T\\|_{B^0_{p,1}\\to L^p}$, it will suffice to fix $k$ and show \n\\begin{equation}\n\\label{eq:01}\n\\|T(P_k f)\\|_{L^p}\\leq C \\|f\\|_{L^p}.\n\\end{equation}\nFurthermore, by the scale invariance of the quantity $\\sum_l 2^{l(n-1)} \n\\sup_y\\|P_l^{\\xi\/|\\xi|} q(y, \\cdot)\\|_{L^1(\\mathbf S^{n-1})}$ this is equivalent to verifying \\eqref{eq:01} \nonly for $k=0$. \nThat is, it suffices to establish \nthe $L^p, p\\geq 2$ boundedness of the operator \n$$\nG f(x)=\\int\\limits_{{\\mathbf R}^n} q(x,\\xi\/|\\xi|) e^{2\\pi i \\xi x} \\varphi(|\\xi|) \\hat{f}(\\xi) d\\xi\n$$\nprovided the symbol $q$ satisfies \n$\\sum\\limits_l 2^{l(n-1)} \\sup_y\\|P_l^{\\xi\/|\\xi|} q(y, \\cdot)\\|_{L^1(\\mathbf S^{n-1})}<\\infty$. \n\nNext, we make the angular decomposition as in the case of the $L^2$ estimates for the adjoint \noperator $G^*$. However, this time we will have to be more careful and \ninstead of the rough cutoffs $\\chi_{K^l_m}$, we shall use \nsmoothed out versions of them. Fix $l$. Choose and fix a \n $2^{-l}$ net $\\theta_m^l\\in K_m^l\\cap \\mathbf S^{n-1}$, so that the family \n$\\{\\theta\\in \\mathbf S^{n-1}: \n|\\theta_m^l-\\theta|\\leq 2^{-l}\\}_m$ has the finite intersection\nproperty. \nIntroduce a family of functions $\\varphi_{l,m}:{\\mathbf R}^n\\to [0,1]$, so that for every $\\xi\\in {\\mathbf R}^n$, \n\\begin{eqnarray}\n\\label{eq:fun}\n& & \\sum\\limits_m \\varphi_{l,m} (2^l(\\xi\/|\\xi|-\\theta_m^l))=1 \\\\ \n\\nonumber\n& & \\sup_\\eta |D^\\beta_\\eta \\varphi_{l,m} (\\eta)|\\leq C_\\beta.\n\\end{eqnarray}\nIn other words, the family of functions $\\{\\varphi_{l,m}\\}$ provides a \nsmooth partition of unity, subordinated to the cover $\\{K_m^l\\}$. 
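For example (this is one of many possible choices), one may take $\\eta_0\\in C^\\infty({\\mathbf R}^n)$, $0\\leq \\eta_0\\leq 1$, with $\\eta_0=1$ on $\\{|\\zeta|\\leq 1\\}$ and $supp \\ \\eta_0\\subset \\{|\\zeta|\\leq 2\\}$, and set, with the usual abuse of notation, \n$$\n\\varphi_{l,m} (2^l(\\xi\/|\\xi|-\\theta_m^l))= \\f{\\eta_0(2^l(\\xi\/|\\xi|-\\theta_m^l))}{\\sum\\limits_{m'} \\eta_0(2^l(\\xi\/|\\xi|-\\theta_{m'}^l))}.\n$$\nThe denominator is bounded below by one, since $\\{\\theta_m^l\\}$ is a $2^{-l}$ net, and above by a dimensional constant, by the finite intersection property, whence \\eqref{eq:fun} holds. 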
\n\nAs before, write \n$$\nG^* g(x)=\\sum\\limits_{l\\geq 0} \\int\\limits_{{\\mathbf R}^n} e^{2\\pi i \\xi x} \\varphi(|\\xi|) \\int e^{-2\\pi i \\xi y} [g(y) \nP_l^{\\xi\/|\\xi|} q(y, \\xi\/|\\xi|)]dy d\\xi.\n$$\nInserting the partition of unity discussed above into the ($l^{th}$ term of the) \nlast formula for $G^*$ yields \n$$\nG^* g (x)=\\sum\\limits_{l\\geq 0} \\sum\\limits_m \\int e^{2\\pi i \\xi (x-y)} \n\\varphi_{l,m} (2^l(\\xi\/|\\xi|-\\theta_m^l)) \\varphi(|\\xi|) [g(y) \nP_l^{\\xi\/|\\xi|} q(y, \\xi\/|\\xi|)]dy d\\xi. \n$$\nFollowing the same strategy as before, we expand $q(y, \\xi\/|\\xi|)$ around $\\theta_m^l\\in K_m^l$.\nAccording to \\eqref{eq:36}, we have \n$$\nP_l^{\\xi\/|\\xi|} q(y, \\xi\/|\\xi|) \n=\\sum\\limits_{\\alpha\\geq 0} \\f{D_\\xi^\\alpha P_l^{\\xi\/|\\xi|} \nq(y, \\theta_m^l) }{\\alpha!}(\\xi\/|\\xi|-\\theta_m^l)^\\alpha.\n$$\nOf course, the last formula is useful only when \n$|\\xi\/|\\xi|-\\theta_m^l|\\lesssim 2^{-l}$, in particular on the \nsupport of $\\varphi_{l,m} (2^l(\\xi\/|\\xi|-\\theta_m^l)) $. 
\nThis gives us the representation \n\\begin{eqnarray*}\n& & G^* g =\\sum\\limits_{l\\geq 0}\\sum\\limits_m \\sum\\limits_{|\\alpha|\\geq 0} (\\alpha!)^{-1}\n\\int e^{2\\pi i \\xi x} \n\\varphi_{l,m} (2^l(\\xi\/|\\xi|-\\theta_m^l)) (\\xi\/|\\xi|-\\theta_m^l)^\\alpha \\varphi(|\\xi|) \\times \\\\\n& &\\times \\int e^{-2\\pi i \\xi y} g(y) \nD_\\xi^\\alpha P_l^{\\xi\/|\\xi|} q(y, \\theta_m^l)dy d\\xi= \\\\\n& & =\n\\sum\\limits_{l\\geq 0}\\sum\\limits_m \\sum\\limits_{|\\alpha|\\geq 0}(\\alpha!)^{-1} Z_{l,m}^\\alpha \n[g(\\cdot) 2^{-l|\\alpha|} D_\\xi^\\alpha P_l^{\\xi\/|\\xi|} q(\\cdot, \\theta_m^l)],\n\\end{eqnarray*}\nwhere \n$$\n\\widehat{Z_{l,m}^\\alpha f}(\\xi)= \\varphi_{l,m} (2^l(\\xi\/|\\xi|-\\theta_m^l)) 2^{l|\\alpha|} \n(\\xi\/|\\xi|-\\theta_m^l)^\\alpha \n\\varphi(|\\xi|) \\hat{f}(\\xi)=\\varphi_{l,m}^\\alpha (\\xi\/|\\xi|-\\theta_m^l)\\varphi(|\\xi|) \\hat{f}(\\xi).\n$$\nTaking the $L^p$ norm of $G^* g$, we get \n\\begin{eqnarray*}\n& & \\|G^* g\\|_{L^p}\\leq \\sum\\limits_{l\\geq 0} \\sum\\limits_{|\\alpha|\\geq 0}(\\alpha!)^{-1}\\|\\sum\\limits_m Z_{l,m}^\\alpha \n[g(\\cdot) 2^{-l|\\alpha|} D_\\xi^\\alpha P_l^{\\xi\/|\\xi|} q(\\cdot, \\theta_m^l)]\\|_{L^p}.\n\\end{eqnarray*}\nLemma \\ref{le:sum} in the Appendix allows us to treat expressions of the type \n$\\|\\sum\\limits_m Z_{l,m}^\\alpha g_m^\\alpha\\|_{L^p}$. 
Indeed, according to \\eqref{eq:n1}, we have \n\\begin{eqnarray*}\n& & \\|\\sum\\limits_m Z_{l,m}^\\alpha \n[g(\\cdot) 2^{-l|\\alpha|} D_\\xi^\\alpha P_l^{\\xi\/|\\xi|} q(\\cdot, \\theta_m^l)]\\|_{L^p}\\lesssim (\\sum\\limits_m \n\\|g(\\cdot) 2^{-l|\\alpha|} D_\\xi^\\alpha P_l^{\\xi\/|\\xi|} q(\\cdot, \\theta_m^l)\\|_{L^p}^p)^{1\/p} \\\\\n& &= 2^{-l|\\alpha|} (\\int |g(y)|^p (\\sum\\limits_m \n |D_\\xi^\\alpha P_l^{\\xi\/|\\xi|} q(y, \\theta_m^l)|^p) dy)^{1\/p} \n\\end{eqnarray*}\nBy virtue of \\eqref{eq:n3}, we get \n\\begin{eqnarray*}\n& & \nD_\\xi^\\alpha P_l^{\\xi\/|\\xi|} q(y, \\theta_l^m) \n=\\sum\\limits_{\\beta\\geq 0} \\f{D_\\xi^{\\alpha+\\beta} P_l^{\\eta\/|\\eta|}q(y,\\eta)}{\\beta!}(\\theta_l^m-\\eta)^\\beta.\n\\end{eqnarray*}\nwhence by averaging\\footnote{this step is identical to the one performed earlier for the \n$L^2$ bounds, except that now the $l^2$ sums are replaced by $l^p$ sums.} over $K^l_m\\cap \\mathbf S^{n-1}$, \n\\begin{eqnarray*}\n & & (\\sum\\limits_m \n| D_\\xi^\\alpha P_l^{\\xi\/|\\xi|} q (y , \\theta_l^m)|^p)^{1\/p} \\lesssim \\\\\n& & \\lesssim \\sum\\limits_{\\beta} \\f{2^{-l |\\beta|}}{\\beta!}\n(\\sum\\limits_m \n|K^l_m\\cap \\mathbf S^{n-1}|^{-1} \\int\\limits_{K^l_m\\cap \\mathbf S^{n-1}} |D_\\xi^{\\alpha+\\beta} P_l^{\\eta\/|\\eta|}q(y,\\eta)|^p\nd\\eta)^{1\/p}\\\\\n& &\\lesssim 2^{l(n-1)\/p} \\sum\\limits_{\\beta} \\f{2^{-l |\\beta|}}{\\beta!} \n\\|D_\\xi^{\\alpha+\\beta} P_l^{\\eta\/|\\eta|}q(y,\\cdot)\\|_{L^p(\\mathbf S^{n-1})} \\\\\n& &\\lesssim 2^{l[(n-1)\/p+|\\alpha|]} \n\\sum\\limits_{\\beta} \\f{C_n^{|\\alpha|+|\\beta|}}{\\beta!} \n\\|P_l^{\\eta\/|\\eta|}q(y,\\cdot)\\|_{L^p(\\mathbf S^{n-1})} \\\\ \n& &\\leq C_n^{|\\alpha|}2^{l[|\\alpha|+(n-1)\/p]}\n\\|P_l^{\\eta\/|\\eta|}q(y,\\cdot)\\|_{L^p(\\mathbf S^{n-1})}.\n\\end{eqnarray*}\nAll in all, \n\\begin{eqnarray*}\n& & \\|G^* g\\|_{L^p}\\leq C_n \\|g\\|_{L^p}\\sum\\limits_{l\\geq 0} \\sum\\limits_{|\\alpha|\\geq 0}(\\alpha!)^{-1} \n 
C_n^{|\\alpha|}2^{l(n-1)\/p}\n\\sup\\limits_y \\|P_l^{\\eta\/|\\eta|}q(y,\\cdot)\\|_{L^p(\\mathbf S^{n-1})}\\\\ \n& &\\leq C_n \\|g\\|_{L^p} \\sum\\limits_l \n2^{l(n-1)\/p}\n\\sup\\limits_y \\|P_l^{\\eta\/|\\eta|}q(y,\\cdot)\\|_{L^p(\\mathbf S^{n-1})},\n\\end{eqnarray*}\nas desired. \n\n\\section{Counterexamples}\n\\label{sec:counter}\n\\subsection{Theorem \\ref{theo:1} is sharp.}\nGiven $p>2$, we will construct \n an explicit symbol $\\sigma(x, \\xi)$, \nso that the corresponding PDO $T_\\sigma$ is not bounded on $L^2({\\mathbf R}^n)$, \nbut which satisfies \\\\\n$\\sup_x |D_\\xi^\\alpha \\sigma(x, \\xi)|\\leq C_\\alpha |\\xi|^{-|\\alpha|}$ and \n$\\sup_x \\norm{\\sigma(x, \\cdot)}{W^{p,n\/p}}<\\infty$. \nThe construction is a minor modification of the \nstandard example of a symbol in $S^0_{1, 1}$, which \nis not bounded on $L^2$, see for example \\cite{Stein}, page 272. \nWe carry out the construction in $n=1$, \nbut this can be easily generalized to higher dimensions.\n\nFor the given $p>2$, fix small $0<\\delta<1\/2$, \nso that\\footnote{The reason for this choice of $\\delta$ will become apparent in the proof below.}\n $2+4\\delta\/(1-2\\delta)<p$, and set \n$$\n\\sigma(x, \\xi)=\\sum\\limits_{s=3}^\\infty \\sum\\limits_{j=2^s}^{2^{s+1}} \\f{e^{-2\\pi i 2^j x}}{j^{1\/2-\\delta}}\n \\varphi(2^{-j} \\xi).\n$$\nAs in the standard example mentioned above, one checks that $T_\\sigma$ is not bounded on $L^2({\\mathbf R}^1)$ \n(the decaying factor $j^{\\delta-1\/2}$ does not restore the boundedness, since $\\sum_j j^{2\\delta-1}=\\infty$), while for $|\\xi|>1$, \n$$\n\\sup_x |D_\\xi^\\alpha \\sigma(x, \\xi)|\\sim |\\xi|^{-|\\alpha|} \\ln^{\\delta-1\/2}(|\\xi|) \n\\leq |\\xi|^{-|\\alpha|}.\n$$\nFinally, to \nestimate $\\sup_x \\norm{\\sigma(x, \\cdot)}{W^{p,1\/p}}$, write \n$$\n\\sigma=\\sum\\limits_{s=3}^\\infty \\sum\\limits_{j=2^s}^{2^{s+1}} \\f{e^{-2\\pi i 2^j x}}{j^{1\/2-\\delta}}\n \\varphi(2^{-j} \\xi)=:\\sum\\limits_{s=3}^\\infty \\sigma^s.\n$$\nBy the convexity of the norms, we have, with $\\theta$ given by $1\/p=\\theta\/2$, \n$$\n\\norm{\\sigma^s(x, \\cdot)}{W^{p,1\/p}}\\leq \n\\norm{\\sigma^s(x, \\cdot)}{H^{1\/2}}^\\theta\n\\norm{\\sigma^s(x, \\cdot)}{L^\\infty}^{(1-\\theta)}.\n$$\nIt is now easy to compute the norms on the right hand side. 
We have \n$$\n\\sup_x \\norm{\\sigma^s(x, \\cdot)}{H^{1\/2}} \n\\sim (\\sum_{j=2^{s}}^{2^{s+1}} \\f{1}{j^{1-2\\delta}})^{1\/2}\n\\sim 2^{\\delta s}.\n$$\nOn the other hand, \n$$\n\\norm{\\sigma^s(x, \\cdot)}{L^\\infty}\\sim 2^{-s(1\/2-\\delta)},\n$$\nwhence $\\sup_x \\norm{\\sigma^s(x, \\cdot)}{W^{p,1\/p}}\\leq \n2^{s(\\delta\\theta-(1\/2-\\delta)(1-\\theta))}$. Clearly, \nsuch an expression sums in $s\\geq 3$, \nprovided $\\delta\\theta<(1\/2-\\delta)(1-\\theta)$ or, equivalently, $p>2+4\\delta\/(1-2\\delta)$.\n\\subsection{Proposition \\ref{prop:5}: Theorem \\ref{theo:4} is sharp}\n\\begin{proof}(Proposition \\ref{prop:5}) \nWe construct a sequence of symbols $\\sigma_\\delta:\\mathbf R^2\\times \\mathbf R^2\\to\\mathbf R^1$, \nso that for a fixed Schwartz function $f$\n$$\n\\lim_{\\delta\\to 0+} |T_{\\sigma_\\delta} f(x)|=H_* f(x)=\\sup_{u\\in \\mathbf S^1} |H_u f(x)|.\n$$\nSince we already know, \\cite{LL}, that \n$H_*$ is {\\it unbounded} on $L^2(\\mathbf R^2)$, \nwe should have\n\\begin{equation}\n\\label{eq:715}\n\\limsup_{\\delta\\to 0+} \\|T_{\\sigma_\\delta}\\|_{L^2\\to L^2}=\\infty. \n\\end{equation}\nIn our construction \n$\\sigma_\\delta$ will depend on $f$, but it is still clear \nthat one can achieve \\eqref{eq:715}. \nNamely, take a sequence $f_N: \\norm{f_N}{L^2(\\mathbf R^2)}=1$, \nso that $\\|H_* (f_N)\\|_{L^2(\\mathbf R^2)}\\geq N$. \nThen construct $\\sigma_{N, \\delta}$, so that $\\lim_{\\delta\\to 0+} \n|T_{\\sigma_{N, \\delta}} f_N|=\nH_* f_N$. Then clearly, \\\\\n$\\limsup_{ N\\to \\infty, \\delta\\to 0+} \n\\|T_{\\sigma_{N,\\delta}}\\|_{L^2\\to L^2}=\\infty$. 
\n\nNow, from the $L^2$ boundedness results of \nTheorem \\ref{theo:4} (or rather the lack thereof), we must have \n\\begin{equation}\n\\label{eq:713}\n\\limsup_{\\delta\\to 0} \\sum\\limits_l 2^{l\/2} \n\\sup_x \\|P_l^{\\xi\/|\\xi|} \\sigma_\\delta(x, \\cdot)\\|_{L^2(\\mathbf S^1)}=\\infty.\n\\end{equation}\nOn the other hand, we will see that \n$\\sup_{x, \\xi, \\delta}|\\sigma_\\delta(x, \\xi)|\\leq 1$ and \n\\begin{equation}\n\\label{eq:714}\n\\sup_{\\delta, x} \\|\\sigma_\\delta(x, \\cdot)\\|_{W^{1, 1}(\\mathbf S^1)}<\\infty. \n\\end{equation}\nNote in contrast that (at least heuristically) \\eqref{eq:713} states \n$$\n\\limsup_{\\delta\\to 0} \\sup_x \\|\\sigma_\\delta(x, \\cdot)\\|_{B^{1\/2}_{2, 1}}=\\infty\n$$\nand by the Sobolev embedding estimate on the sphere \n\\eqref{eq:bern} (and up to the usual Besov spaces \nadjustments at the endpoints), one should have that the quantity \nin \\eqref{eq:714} (at least in principle) controls \\eqref{eq:713}. Having both \n\\eqref{eq:713} and \\eqref{eq:714} for a concrete example suggests that \nthe conditions imposed in Theorem \\ref{theo:4} \n are extremely tight. \n\nLet us now describe the construction of $\\sigma_\\delta$. First of all, \n\\begin{eqnarray*}\nH_*f(x) = \\sup_{u\\in\\mathbf S^1} |H_u f(x)| &=& \\sup_{u\\in\\mathbf S^1}|\\int \nsgn(u\\cdot \\xi\/|\\xi|) \\hat{f}(\\xi) e^{2\\pi i \\xi\\cdot x} d\\xi|=\\\\\n&=&\n|\\int \nsgn(u(x)\\cdot \\xi\/|\\xi|) \\hat{f}(\\xi) e^{2\\pi i \\xi\\cdot x} d\\xi|,\n\\end{eqnarray*}\nfor some measurable function $u(x):\\mathbf R^1\\to \\mathbf S^1$. \nClearly $u(x)$ will depend on the function $f$, see the remarks above after \\eqref{eq:715}. \\\\\nIntroduce a function $\\psi:\\psi\\in C^\\infty, -1\\leq \\psi(x)\\leq 1$, and \nso that $\\psi(z)=-1: z\\in (-\\infty, -1]$, $\\psi(z)=1: \nz\\in [1, \\infty)$. 
Clearly \n$$\nH_* f(x)=\\lim_{\\delta\\to 0+} |T_{\\sigma_\\delta} f(x)|= \\lim_{\\delta\\to 0+}|\\int \n\\psi\\left(\\f{u(x)\\cdot \\xi\/|\\xi|}{\\delta}\\right) \n\\hat{f}(\\xi) e^{2\\pi i \\xi\\cdot x} d\\xi|, \n$$\nthat is $\\sigma_\\delta(x, \\xi\/|\\xi|)=\\psi(\\f{u(x)\\cdot \\xi\/|\\xi|}{\\delta})$, \nfor which we will verify \\eqref{eq:714}, while it \nis clearly bounded in absolute value by one. \n\nWe pause \nfor a second to comment on the particular form of $T_{\\sigma_\\delta}$. Note that the function $u(x)$ \nin general will not be smooth\\footnote{Note that under some extra smoothness assumptions on $u$, \nLacey and Li have managed to prove $L^2$ boundedness!} and therefore will not fall under \nthe scope of any standard boundedness theory for PDO. Also, note that while the map \n $\\xi\\to \\sigma_\\delta(x, \\xi)$ is definitely smooth, its derivatives are quite large and \nblow up in the important limit $\\delta\\to 0$. This shows that in order to \ntreat maximal operators, built upon singular multipliers (as is the case here), \none needs the full strength of Theorems \\ref{theo:1}, \\ref{theo:4} and beyond. \n\nGoing back to the proof of \\eqref{eq:714}, compute \n\\begin{eqnarray*}\n& & \\f{\\partial \\sigma}{\\partial_{\\xi_1}}= \n\\f{\\psi'(\\f{u(x)\\cdot \\xi\/|\\xi|}{\\delta})}{\\delta|\\xi|^3}\\left(u_1(x)\\xi_2^2- u_2(x)\\xi_1\\xi_2\\right)= \n\\f{\\psi'(\\f{u(x)\\cdot \\xi\/|\\xi|}{\\delta})\\xi_2}{\\delta|\\xi|^2}u(x)\\cdot \n(\\xi\/|\\xi|)^{\\perp} \\\\\n& & \\f{\\partial \\sigma}{\\partial_{\\xi_2}}=\n\\f{\\psi'(\\f{u(x)\\cdot \\xi\/|\\xi|}{\\delta})}{\\delta|\\xi|^3}\n\\left(u_2(x)\\xi_1^2- u_1(x)\\xi_1\\xi_2\\right)= -\\f{\\psi'(\\f{u(x)\\cdot \n\\xi\/|\\xi|}{\\delta})\\xi_1}{\\delta|\\xi|^2} u(x)\\cdot \n(\\xi\/|\\xi|)^{\\perp}, \n\\end{eqnarray*}\nwith the convention $(\\zeta_1, \\zeta_2)^\\perp=(\\zeta_2, -\\zeta_1)$. \nClearly, the supports of both derivatives are contained in \n$\\{\\xi: |u(x)\\cdot \\xi\/|\\xi||\\leq \\delta\\}$, $\\delta\\ll 1$. Also, on their support, \n$|\\nabla \\sigma_\\delta(x, \\xi\/|\\xi|)|\\leq C \\delta^{-1}$. 
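Moreover, \n$$\n|\\{\\xi\\in \\mathbf S^1: \\ |u(x)\\cdot \\xi|\\leq \\delta\\}|\\sim \\delta, \n$$\nsince this set is the union of two arcs of length $\\sim \\delta$, centered at the two unit vectors orthogonal to $u(x)$. 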
It follows that \n$$\n\\|\\sigma_\\delta(x, \\cdot)\\|_{W^{1, 1}(\\mathbf S^1)}\\leq C+ \n\\int_{\\{\\xi\\in \\mathbf S^1: |u(x)\\cdot \\xi|\\leq \\delta\\}}\n|\\nabla \\sigma_\\delta(x, \\xi)|d\\xi\\leq C,\n$$\nwhere $C$ is independent of $\\delta$. This was the claim in \\eqref{eq:714}. \n\\end{proof}\n\n\n\n\n\\section{Appendix}\n\n\\subsection{Estimates for Fourier transforms of functions supported on small spherical caps.}\nIn this section, we present a pointwise estimate for the kernels of \nmultipliers that restrict the Fourier transform to a small spherical cap. \n\\begin{lemma}\n\\label{le:900}\nLet $\\theta\\in \\mathbf S^{n-1}$ and let \n$\\varphi$ be a $C^\\infty$ function with $supp \\ \\varphi\\subset \\{\\xi: 1\/2\\leq |\\xi|\\leq 2\\}$. \nLet also $l>0$ be any integer. Define $K_{l, \\theta}$ to be the inverse \nFourier transform of $\\varphi(2^l(\\xi\/|\\xi|-\\theta))\\varphi(|\\xi|)$, that is \n$$\nK_{l, \\theta} (x)=\\int \\varphi(2^l(\\xi\/|\\xi|-\\theta))\\varphi(|\\xi|) e^{2\\pi i x\\cdot \\xi} d\\xi.\n$$\nThen, for every $N>0$, there exists $C_N$, so that \n\\begin{equation}\n\\label{eq:201}\n|K_{l, \\theta}(x)|\\leq C_N 2^{-l(n-1)}(1+|\\dpr{x}{\\theta}|)^{-N} \n(1+2^{-l}|x-\\dpr{x}{\\theta}\\theta|)^{-N}.\n\\end{equation}\nThat is, writing $x=\\dpr{x}{\\theta}\\theta+x'$, the function has arbitrary polynomial decay in the direction of $\\theta$, while in the directions transversal to $\\theta$ it has the same decay only at scale $2^l$, that is, like $(1+2^{-l}|x'|)^{-N}$. \nIn particular, \n\\begin{equation}\n\\label{eq:202}\n\\sup_{\\theta, l} \\int |K_{l, \\theta}(x)| dx\\leq C_n<\\infty,\n\\end{equation}\nwhere the constant $C_n$ depends on $\\|\\varphi\\|_{L^\\infty}$ and \nthe smoothness properties of $\\varphi$. \n\\end{lemma} \n\\begin{proof}\nBy rotation symmetry, we can assume without loss of generality that $\\theta=e_1=(1, 0, \\ldots, 0)$. \nFix $l$ and drop the subindices for notational convenience. 
\nWe will need to show that for every \n$x=x_1 e_1+x'$, \n\\begin{equation}\n\\label{eq:203}\n|K(x)|\\leq C_N 2^{-l(n-1)} (1+|x_1|)^{-N} (1+2^{-l}|x'|)^{-N}.\n\\end{equation}\nFirst of all, by support considerations, one has \n$|K(x)|\\leq C_n 2^{-l(n-1)}.$ \\\\\nNext, we will show that integration by parts in the variable $\\xi_1$ \nyields\n\\begin{equation}\n\\label{eq:700} \nK(x)= x_1^{-1} \\tilde{K}(x),\n\\end{equation}\nwhereas integration by parts in each of the variables $\\xi_j: j=2, \\ldots, n$ yields \n\\begin{equation}\n\\label{eq:701} \nK(x)=(2^{-l} x_j)^{-1} \\tilde{K}(x),\n\\end{equation}\nwhere $\\tilde{K}(x)$ is different in each instance, but it has the form \n$$\n\\tilde{K}(x)=\\int \\varphi_1(2^l(\\xi\/|\\xi|-e_1))\\varphi_2(|\\xi|) e^{2\\pi i x\\cdot \\xi} d\\xi\n$$\nfor some $C^\\infty$ \nfunctions\\footnote{As we shall see, the functions $\\varphi_1, \\varphi_2$ are obtained in a specific way \nfrom $\\varphi$ via the operations of \ndifferentiation and multiplication by monomials.} \n$\\varphi_1, \\varphi_2$ with $supp \\ \\varphi_{k}\\subset \\{\\xi: 1\/2\\leq |\\xi|\\leq 2\\}, k=1,2$. \n\nThis is enough to deduce \\eqref{eq:203} and thus Lemma \\ref{le:900}. \nIndeed, by iterating \\eqref{eq:700} and \\eqref{eq:701}, one gets the formula \n$$\nK(x)= x_1^{-N_1} (\\prod_{j=2}^n (2^{-l} x_j)^{N_j})^{-1} \\tilde{K}_{N_1, \\ldots, N_n}(x),\n$$ \nfor any $n$ tuple of integers $(N_1, N_2, \\ldots, N_n)$. Combining this representation with the \nestimate $|\\tilde{K}_{N_1, \\ldots, N_n}(x)|\\leq C_{n, N_1, \\ldots, N_n} 2^{-l(n-1)}$, one deduces \\eqref{eq:203}. 
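Indeed, taking $N_1=\\ldots=N_n=N$ and combining with the trivial bound gives \n$$\n|K(x)|\\leq C_N 2^{-l(n-1)} \\min(1, |x_1|^{-N})\\prod_{j=2}^n \\min(1, (2^{-l}|x_j|)^{-N}),\n$$\nand since $\\max_{2\\leq j\\leq n} |x_j|\\geq |x'|\/\\sqrt{n}$, the right hand side is controlled by \n$C_N' 2^{-l(n-1)} (1+|x_1|)^{-N}(1+2^{-l}|x'|)^{-N}$. 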
\n\nFor \\eqref{eq:700}, integration by parts yields \n\\begin{eqnarray*}\n& & K(x)=-\\f{1}{2\\pi i x_1} \\int \\partial_{\\xi_1} [\\varphi(2^l(\\xi\/|\\xi|-e_1))\\varphi(|\\xi|)] \ne^{2\\pi i x\\cdot \\xi} d\\xi = \\\\\n& & =\n-\\f{1}{2\\pi i x_1}\\int \\sum_{j=2}^n 2^l \\f{\\xi_j^2}{|\\xi|^2}\n \\partial_1\\varphi(2^l(\\xi\/|\\xi|-e_1)) \n\\varphi(|\\xi|)|\\xi|^{-1} e^{2\\pi i x\\cdot \\xi} d\\xi\\\\\n& & \n+\\f{1}{2\\pi i x_1} \\int \\sum_{j=2}^n 2^l \\f{\\xi_j\\xi_1}{|\\xi|^2} \n\\partial_j \\varphi(2^l(\\xi\/|\\xi|-e_1))\\varphi(|\\xi|)|\\xi|^{-1} e^{2\\pi i x\\cdot \\xi} d\\xi\\\\\n& &- \\f{1}{2\\pi i x_1}\\int \\varphi(2^l(\\xi\/|\\xi|-e_1)) \\varphi'(|\\xi|)\\f{\\xi_1}{|\\xi|} \ne^{2\\pi i x\\cdot \\xi} d\\xi.\n\\end{eqnarray*}\nThe third term is clearly in the form $x_1^{-1} \\tilde{K}(x)$, by taking into account that $supp \\varphi\\subset \\{\\xi: 1\/2\\leq |\\xi|\\leq 2\\}$. \\\\\nThe second term above can be rewritten in the form \n\\begin{eqnarray*}\n& &\n\\f{1}{2\\pi i x_1} \\int \n\\varphi_1(2^l(\\xi\/|\\xi|-e_1))\\f{\\xi_1}{|\\xi|}\\varphi(|\\xi|)|\\xi|^{-1} e^{2\\pi i x\\cdot \\xi} d\\xi=\\\\\n& & = \\f{1}{2\\pi i x_1}\\int \n[\\varphi_1(2^l(\\xi\/|\\xi|-e_1))+2^{-l} \n\\tilde{\\varphi}_{1}(2^l(\\xi\/|\\xi|-e_1))]\\varphi(|\\xi|)|\\xi|^{-1} e^{2\\pi i x\\cdot \\xi} d\\xi=\\\\\n& & = x_1^{-1} \\tilde{K}(x),\n\\end{eqnarray*}\nwhere $\\varphi_1(\\eta)=\\sum_{j=2}^n \\eta_j \\partial_{\\eta_j}\\varphi(\\eta)$ and \n$\\tilde{\\varphi}_1(\\eta)=\\eta_1\\varphi_1(\\eta)$. \\\\\nAnalogously, one can rewrite the first term of $K(x)$ in the form\n$2^{-l} x_1^{-1} \\tilde{K}(x)$, i.e. it has an extra decay factor of $2^{-l}$. \nThis establishes \\eqref{eq:700}. 
\n\nFor \\eqref{eq:701}, we obtain by integration by parts in $\\xi_j, 2\\leq j\\leq n$, \n\\begin{eqnarray*}\n& & K(x)=\\f{1}{2\\pi i x_j} \\int \\partial_j\\varphi(2^l(\\xi\/|\\xi|-e_1))\\f{2^l \\xi_1 \\xi_j}{|\\xi|^3} \\varphi(|\\xi|) \ne^{2\\pi i x\\cdot \\xi} d\\xi\\\\\n& &+ \\f{1}{2\\pi i x_j} \\sum\\limits_{k\\neq j, k=2}^n \n\\int \\partial_k\\varphi(2^l(\\xi\/|\\xi|-e_1))\\f{2^l \\xi_k \\xi_j}{|\\xi|^3} \\varphi(|\\xi|) \ne^{2\\pi i x\\cdot \\xi} d\\xi\\\\\n& &- \\f{1}{2\\pi i x_j} \\int \\partial_j \\varphi(2^l(\\xi\/|\\xi|-e_1)) 2^l(\\sum_{k\\neq j, k=1}^n \n\\f{\\xi_k^2}{|\\xi|^2}) \\varphi(|\\xi|) e^{2\\pi i x\\cdot \\xi} d\\xi\\\\\n& &- \\f{1}{2\\pi i x_j}\\int \\varphi(2^l(\\xi\/|\\xi|-e_1)) \\varphi'(|\\xi|)\\f{\\xi_j}{|\\xi|} \ne^{2\\pi i x\\cdot \\xi} d\\xi.\n\\end{eqnarray*}\nBy performing a similar analysis as in the proof of \\eqref{eq:700}, we easily see that the first term above \nis in the form $x_j^{-1} \\tilde{K}(x)$, while the second and the fourth terms are in fact even better, since \nthey are in the form $2^{-l}x_j^{-1} \\tilde{K}(x)$. The third term contains two types of contributions. Clearly, \n$$\n\\f{1}{2\\pi i x_j} \\int \\partial_j \\varphi(2^l(\\xi\/|\\xi|-e_1)) 2^l(\\sum_{k\\neq j, k=2}^n \n\\f{\\xi_k^2}{|\\xi|^2}) \\varphi(|\\xi|) e^{2\\pi i x\\cdot \\xi} d\\xi\n$$\nis of the form $2^{-l}x_j^{-1} \\tilde{K}(x)$, while lastly, \n$$\n\\f{1}{2\\pi i x_j} \\int \\partial_j \\varphi(2^l(\\xi\/|\\xi|-e_1)) \n2^l \\f{\\xi_1^2}{|\\xi|^2} \\varphi(|\\xi|) e^{2\\pi i x\\cdot \\xi} d\\xi\n$$\nis of the form $2^l x_j^{-1} \\tilde{K}(x)$, which is exactly the statement of \\eqref{eq:701}. \n\\end{proof}\n\\subsection{$l^p$ functions of cone multipliers}\nIn this section, we discuss a simple extension of Lemma \\ref{le:900}, \nwhich is concerned with appropriate \n $L^p$ bounds \nfor $l^p$ functions of such cone multipliers. 
\n\\begin{lemma}\n\\label{le:sum}\nLet $l\\gg 1$ and let $\\{\\theta_m^l\\}_{m}$ be a $2^{-l}$ net in $\\mathbf S^{n-1}$, so that the family \n$\\{\\theta\\in \\mathbf S^{n-1}: \n|\\theta_m^l-\\theta|\\leq 2^{-l}\\}_m$ has the finite intersection property. Define \n$$\n\\widehat{P_m f}(\\xi)=\\varphi_{l, m}(2^l(\\xi\/|\\xi|-\\theta_m^l)) \\varphi(|\\xi|) \\hat{f}(\\xi),\n$$\nwhere $\\varphi_{l, m}$ are as in \\eqref{eq:fun}.\nThen one has \n\\begin{eqnarray}\n\\label{eq:n1}\n& & \\|\\sum\\limits_m P_m g_m\\|_{L^p({\\mathbf R}^n)}\\leq C \\left(\\sum\\limits_m \n\\|g_m\\|_{L^p}^p\\right)^{1\/p}\\quad\\quad \\textup{if}\\quad 1\\leq p\\leq 2 \\\\\n\\label{eq:n2}\n& & (\\sum\\limits_m \\|P_m g\\|_{L^q({\\mathbf R}^n)}^q )^{1\/q}\\leq C \\|g\\|_{L^q}\\quad\\quad \\textup{if}\\quad 2\\leq\nq\\leq \\infty.\n\\end{eqnarray}\n\\end{lemma}\n\\begin{proof} \nSince \\eqref{eq:n1} and \\eqref{eq:n2} are dual, it will suffice to check \\eqref{eq:n2}. Next, the\n$L^2$ estimate is trivial by Plancherel's theorem and the finite intersection property of the\nsupports of $\\varphi_{l, m}(2^l(\\xi\/|\\xi|-\\theta_m^l))$. 
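More precisely, by Plancherel's theorem, \n$$\n\\sum\\limits_m \\|P_m g\\|_{L^2}^2=\\sum\\limits_m \\int |\\varphi_{l, m}(2^l(\\xi\/|\\xi|-\\theta_m^l)) \\varphi(|\\xi|)\\hat{g}(\\xi)|^2 d\\xi\\lesssim \\|g\\|_{L^2}^2, \n$$\nsince at every $\\xi$ only a bounded number of the functions $\\varphi_{l, m}(2^l(\\xi\/|\\xi|-\\theta_m^l))$ are nonzero. 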
Thus, by interpolation it suffices to check \n$$\n\\sup\\limits_m \\|P_m g\\|_{L^\\infty}\\leq C\\|g\\|_{L^\\infty}.\n$$\nBut $P_m g(x)=K_{l, \\theta_m^l}*g(x)$ and so \n$$\n\\|P_m g\\|_{L^\\infty}\\leq \\|K_{l, \\theta_m^l}\\|_{L^1}\\|g\\|_{L^\\infty}\\leq C \\|g\\|_{L^\\infty}.\n$$\nwhere the last inequality follows from \\eqref{eq:202}.\n\\end{proof}\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\n\nWe study equivalence classes of decompositions of $S^3$ and also decompositions of other manifolds.\nThese decompositions are given by toroidal defining sequences \n(we use the term {toroidal} for a subspace of an $n$-dimensional manifold \nbeing homeomorphic to the disjoint union of finitely many copies of $S^{n-2} \\times D^2$) \nalthough more generally\nit would be possible to get similar results by considering handlebodies instead of solid tori in the \ndefining sequences. \nThe problem of classifying decompositions was studied by many authors.\nBy \\cite{Sh68b} so-called Antoine decompositions in $\\mathbb R^3$ are equivalently embedded if and only if\ntheir toroidal defining sequences can be mapped into each other by homeomorphisms of the stages.\nMore generally \\cite{ALM68} \nfor a decomposition $G$ of $\\mathbb R^3$ given by an arbitrary \ndefining sequence made of handlebodies\nthe homeomorphism type of the pair $(\\mathbb R^3 \/ G, \\mathrm {cl} \\thinspace \\pi_G ( H_G))$, where\n$\\pi_G$ is the decomposition map and \n$H_G$ is the union of the non-degenerate elements,\nis determined by the homeomorphism types of the consecutive stages of the defining sequence of $G$.\nBy \\cite{GRWZ11} two Bing-Whitehead decompositions of $S^3$ are equivalently embedded if and only if\nthe stages of the toroidal defining sequences\nare homeomorphic to each other after some number of iterations (counting only the Bing stages).\nDecompositions given by defining sequences are upper semi-continuous\nand many shrinkability 
conditions are known about them.\nFor example, Bing-Whitehead decompositions are shrinkable under some conditions \\cite{AS89, KP14},\njust like Antoine's necklaces, which are wild Cantor sets.\nIn \\cite{Ze05} the maximal genus of the handlebodies associated with a defining sequence \nis used to study Cantor sets. \n\nIn the present paper we define the concordance of decompositions (see Section~\\ref{concdef}) \nwhich come with \ntoroidal defining sequences. As for knots, slice decompositions \nplay an important role in the classification: a decomposition is slice if \neach component of a defining sequence is slice in such a way that the $D^{n-1} \\times D^2$ thickened slice disk stages are nested into each other.\nBeing concordant means the analogous concordance of the solid tori in the defining sequence,\nand this makes it possible to apply the well-known knot and link concordance invariants\nin order to distinguish between the concordance classes of such decompositions.\nFor example, we show that \nthe concordance group of decompositions of $S^3$, where the defining sequences have some intrinsic properties, \nhas at least uncountably many elements; see Theorem~\\ref{uncount1}.\nThe uncountably many elements that we find are represented by Antoine's necklaces. \n\n\nDecompositions appear in studying manifolds, where cell-like \nresolutions of homology manifolds \\cite{Qu82, Qu83, Qu87, Th84, Th04} provide a tool for \nobtaining topological manifolds. \nDecompositions also appear\nin the proof of the Poincar\\'e conjecture in dimension four, see \\cite{Fr82, FQ90, BKKPR21}, where\n a cell-like decomposition of a $4$-dimensional manifold yields \n a decomposition space which is a topological manifold. \nIn higher dimensions the decomposition space given by a cell-like decomposition of a compact topological manifold\n is a homology manifold, which is also a topological manifold if it satisfies the disjoint disk property \\cite{Ed16}. 
\n A particular result \\cite{Ca78, Ca79, Ed80, Ed06} is that the double suspension of every integral homology $3$-sphere\nis homeomorphic to $S^5$, \nthat is, for every homology $3$-sphere $H$ \nthere is a cell-like decomposition $G$ of $S^5$ such that \nthe decomposition space is the homology manifold $\\Sigma^2 H$\n and since $\\Sigma^2 H$ satisfies the disjoint disk property,\n the decomposition $G$ is shrinkable (and this implies that the decomposition space is $S^5$). \n\nBesides concordance, \n we also define and study another equivalence relation, which is \n the cobordism of decompositions; see Definition~\\ref{borddecomp}\nand Section~\\ref{computebord}. This yields a cobordism group, which has\na natural homomorphism into \nthe cobordism group of homology manifolds \\cite{Mi90, Jo99, JR00}. We study how homology manifolds are related to \nthe cobordism group of cell-like decompositions via taking the decomposition space. \nIt turns out that \nevery such decomposition space is cobordant to a topological manifold in the cobordism group of homology manifolds and\nthey generate a subgroup isomorphic to the cobordism group of topological manifolds, \n see Proposition~\\ref{bordkep}. \n Often we state and prove our results only for unoriented cobordisms but all the arguments\n obviously work for the oriented cobordisms as well, giving the corresponding results. \n\nThe paper is organized as follows.\nIn Section~\\ref{prelim}\nwe give some basic lemmas and the definitions of the most important notions and \nin Section~\\ref{results}\nwe state and prove our main results.\n\nThe author would like to thank the referee for the helpful comments, which improved the paper. 
\n\n\n\\section{Preliminaries}\\label{prelim}\n\n\n\n\\subsection{Cell-like decompositions}\n\nThroughout the paper we suppose that \nif $X$ is a compact manifold with boundary and \n$Y$ is a compact manifold with corners, then \nan embedding $e \\co Y \\to X$ is such that \nthe corners of $Y$ are mapped into $\\partial X$ and the pairs of boundary components near the corners of $Y$ are \nmapped into $\\int X$ and into $\\partial X$, respectively.\nWe also suppose that $e(\\int Y) \\subset \\int X$. \n If $Y$ has no corners, then \n$\\partial X \\cap e(Y) = \\emptyset$. \n We generalize the notions of defining sequence, cellular set and cell-like set in the obvious way \nfor manifolds with boundary as follows. Recall that a decomposition of a topological space $X$ is \n a collection of pairwise disjoint subsets of $X$ whose union is equal to $X$. \n\n\\begin{defn}[Defining sequence for a subset]\\label{defseq}\nLet $X$ be an $n$-dimensional manifold with possibly non-empty boundary.\n A \\emph{defining sequence} for a subset $C \\subset X$ \n is a sequence \n $$c \\co \\mathbb N \\to \\mathcal P(X)$$\n $$C_0, C_1, C_2, \\ldots, C_n, \\ldots$$ of compact $n$-dimensional \n submanifolds-with-boundary possibly with corners \n in $X$ such\n that \n \\begin{enumerate}\n \\item\n every\n$C_{n+1}$ has a neighbourhood $U$ such that \n$U \\subset C_n$, \n\\item\nin every component of $C_n$ there is a component of $C_{n+1}$,\n\\item\n$\\cap_{n=0}^{\\infty} C_n = C$ and\n\\item\nif $\\partial X \\neq \\emptyset$, then \nthere is an $\\varepsilon > 0$ such that \n$\\partial X \\times [0, \\varepsilon)$ is a collar neighbourhood \nof $\\partial X$ and for every $C_n$ such that \n $C_n \\cap \\partial X \\neq \\emptyset$\n we have $C_n \\cap (\\partial X \\times [0, \\varepsilon)) = (C_n \\cap \\partial X) \\times [0, \\varepsilon)$. 
\n\\end{enumerate}\n A decomposition of $X$ defined by the defining sequence $c$ is the triple \n $(X, \\mathcal D, C)$, where \n $C = \\cap_{n=0}^{\\infty} C_n$ and \n the elements of $\\mathcal D \\subset \\mathcal P(X)$ \n are \n\\begin{enumerate}\n\\item\nthe connected components of $C$ and \n\\item\nthe points in $X - C$.\n\\end{enumerate}\nWe denote the decomposition map by $\\pi$. \n\\end{defn}\n\nObserve that for a decomposition $(X, \\mathcal D, C)$ the set $C$ is non-empty and each of the non-degenerate elements is a subset of $C$. \nThere could be singletons in $C$ as well. \nFor example, in the case of an Antoine's necklace there are no non-degenerate elements; we choose\n $C$ to be the Antoine's necklace itself, which is a Cantor set, and so $C$ consists of singletons. \nEvery decomposition defined by some defining sequence is upper semi-continuous. \nA decomposition $\\mathcal D$ of a manifold induces a decomposition on its boundary by intersecting the decomposition elements\nwith the boundary. \nThe decomposition of the boundary $\\partial X$ induced by a defining sequence in $X$ is upper semi-continuous.\n This induced decomposition is given by an induced defining sequence $C_{n} \\cap \\partial X$\n if $D_{n, k} \\cap \\partial X \\neq \\emptyset$ for every component $D_{n, k}$ of each $C_n$. 
\n If all $C_n$ in a defining sequence are connected, then $\\cap_{n=0}^{\\infty} C_n$ is connected.\n\n\n\n\\begin{defn}[Cell-like set]\nA compact subset $C$ of a metric space $X$ is \\emph{cell-like} if \nfor every neighbourhood $U$ of $C$ there is a neighbourhood $V$ of $C$ in $U$ such that\nthe inclusion map $V \\to U$ is homotopic in $U$ to a constant map.\nA decomposition is called cell-like if\neach of its decomposition elements is cell-like.\n\\end{defn}\n\nCell-like sets given by defining sequences are connected \nbecause if the connected components could be separated by disjoint open neighbourhoods, then \n no homotopy could deform the set into one single point in the neighbourhoods.\n\n\nA space $X$ is \\emph{finite dimensional} if for every open cover $\\mathcal U$ of $X$ there exists\na refinement $\\mathcal V$ of $\\mathcal U$ such that no point of $X$ lies in more \nthan $K_X$ of the elements of $\\mathcal V$, where $K_X$ is a constant depending only on $X$.\n\n\n\\begin{lem}\\label{dimdecomp}\nLet $\\mathcal D$ be a decomposition of a manifold $X$ possibly with non-empty boundary\ngiven by a defining sequence.\n Then the decomposition space $X \/ \\mathcal D$ \n is finite dimensional.\n\\end{lem}\n\\begin{proof}\nIf $X$ has no boundary, then the statement follows from \n Theorem~2 and Proposition~3 in \\cite[Chapter~34]{Da86}.\n If $X$ has non-empty boundary, then \n the argument is similar. 
\n\\end{proof}\n\n\n\\subsection{Homology manifolds}\n\n\n\nRecall that \na metric space $Y$ is an \\emph{absolute neighbourhood retract} (or \\emph{ANR} for short)\nif \nfor every metric space $Z$ and embedding $i \\co Y \\to Z$ such that \n$i(Y)$ is closed there is a neighbourhood $U$ of $i(Y)$ in $Z$ which retracts onto $i(Y)$, that is\n$r|_{i(Y)} = \\mathrm {id}_{i(Y)}$ for some map $r \\co U \\to i(Y)$.\nIt is a fact that every manifold is an ANR.\nA space is called a \\emph{Euclidean neighbourhood retract} (or \\emph{ENR} for short)\nif it can be embedded into a Euclidean space as a closed subset so that it is a retract of \nsome of its neighbourhoods. \nIt is well-known that \na space is an ENR if and only if it is a locally compact, finite dimensional, separable ANR. \n\n\n\n\n\n\n\n\n\\begin{defn}[Homology manifold]\nLet $n \\geq 0$ and let $X$ and $Y$ be finite dimensional ANR spaces, where\n$Y$ is a closed subset of $X$. \nSuppose that for every $x \\in X$ we have\n\\begin{enumerate}\n\\item\n$H_k ( X, X - \\{ x \\} ) = 0$ for $k \\neq n$ and \n\\item\n$H_n ( X, X - \\{ x \\} )$ is isomorphic to $\\mathbb Z$ if $x \\in X - Y$ and it is isomorphic to $0$ if $x \\in Y$.\n\\end{enumerate}\nThen $X$ is an $n$-dimensional \\emph{homology manifold}. 
\nThe points $x \\in Y$ \nare the \\emph{boundary points of $X$} and the set $Y$ is denoted by $\\partial X$.\nA homology manifold is called \\emph{closed} if it is compact and has no boundary.\n\\end{defn}\n\nSince locally compact and separable homology manifolds are ENR spaces, a locally compact and separable\n homology manifold is called an \\emph{ENR homology manifold}.\nIn \\cite{Mi90} it is proved that\nfor $n \\geq 1$ and for every compact and locally compact $n$-dimensional homology manifold $X$\n the set of boundary points $\\partial X$ \n is an $(n-1)$-dimensional homology manifold.\n\n\n\nSometimes a space $X$ without the ANR property but having \n$H_k ( X, X - \\{ x \\} ) = 0$ for $k \\neq n$ and \n$H_n ( X, X - \\{ x \\} ) = \\mathbb Z$ in the sense of \\v{C}ech homology is also called a homology manifold. These spaces arise as \nquotient spaces of acyclic decompositions of topological manifolds \n\\cite{DW83}, while ANR homology manifolds are often homeomorphic to \nquotients of cell-like decompositions \\cite{Qu82, Qu83, Qu87}.\n\n\n\nIn the case of cell-like decompositions the decomposition spaces are homology manifolds if they are finite dimensional, essentially because \nof the Vietoris-Begle theorem \\cite[Theorem~0.4.1]{DV09}. In more detail, we will use the following. \nLet $X'$ be a compact $n$-dimensional manifold with possibly non-empty boundary, let\n$Y$ be $\\partial X' \\times [0,1]$ and attach $Y$ to $X'$ as a collar to get a manifold $X$.\n\n\\begin{lem}\\label{decomphomolmanif}\nLet $\\mathcal D'$ be a cell-like decomposition of $X'$ given by a defining sequence such that \n$X'$ contains a small open set (intersecting the possibly non-empty boundary) which consists of singletons. \n Suppose that \n the induced decomposition on $\\partial X'$ is cell-like and it is given by the induced defining sequence. 
\n Suppose that \n in $Y$ a cell-like decomposition $\\mathcal E$ is given, where\n $\\mathcal E$ is the product of the decomposition \n induced by $\\mathcal D'$ on $\\partial X'$ and the trivial decomposition of $[0,1]$. \n Denote by $\\mathcal D$ the resulting decomposition on $X$. \nThen $X\/\\mathcal D$ \nis an $n$-dimensional ENR homology manifold with possibly non-empty boundary. \n The boundary points of $X\/\\mathcal D$ are exactly the points of the ENR homology manifold \n$\\pi (\\partial X)$.\n\\end{lem}\n\\begin{proof}\nWe have to show that \nthe quotient space \n$$X \/ \\mathcal D$$\nis an $n$-dimensional homology manifold with boundary the homology manifold\n$\\pi (\\partial X)$.\nTake the closed manifold $$X \\cup_{\\varphi} X,$$\nwhere\n$\\varphi \\co \\partial X \\to \\partial X$\nis the identity map.\n\nThe decomposition space $X' \/ \\mathcal D'$\n(that is, the part of the decomposition space $X \/ \\mathcal D$ which is obtained from \n$X'$) \nis finite dimensional by Lemma~\\ref{dimdecomp}. \nThe doubling of the decomposition $\\mathcal D$ on \n$X \\cup_{\\varphi} X$\nyields\na finite dimensional quotient space; we get this by using \nestimates for the covering dimension, see \\cite{HW41} and \\cite[Corollary~2.4A]{Da86}.\nSo the decomposition space $P$ obtained by factorizing \n$X \\cup_{\\varphi} X$\nby the double of $\\mathcal D$ is a closed finite dimensional homology manifold by \\cite[Proposition~8.5.1]{DV09}. \nSince a small neighbourhood of a singleton results in an open set in $P$ homeomorphic to $\\mathbb R^{n}$, the space $P$ is $n$-dimensional.\nWe obtain the space $X \/ \\mathcal D$\nby cutting $P$ into two pieces along $\\pi(\\partial X)$. By a similar argument \nthe space $\\pi(\\partial X)$ is a closed $(n-1)$-dimensional homology manifold. \nThe set $\\pi(\\partial X)$ is closed in the decomposition space $X \/ \\mathcal D$ since\n$\\mathcal D$ is upper semi-continuous and $\\partial X$ is closed. 
\nAlso, the homology group $H_n(X \/ \\mathcal D, X \/ \\mathcal D - \\{p\\} )$ is equal to \n$0$ for every $p \\in \\pi(\\partial X)$. So $\\pi(\\partial X)$ is the boundary of $X \/ \\mathcal D$. \n\nMoreover the space $X \/ \\mathcal D$ is \n a locally compact separable metric space because $X$ is so. By \\cite[Corollary~7.4.8]{DV09}\nthe space $X\/\\mathcal D$ is an ANR, so it follows that it is an ENR. The same holds for $\\pi(\\partial X)$. \n\\end{proof}\n\n\n\n\\begin{defn}[Cobordism of homology manifolds]\nThe closed $n$-dimensional \nhomology manifolds $X_1$ and $X_2$ are \\emph{cobordant} \nif there exists a compact $(n+1)$-dimensional homology manifold $W$ such that \n$\\partial W$ is homeomorphic to the disjoint union of $X_1$ and $X_2$.\nThe induced cobordism group (the group operation is the disjoint union) is \ndenoted by $\\mathfrak N_n^H$. In a similar way the induced oriented cobordism group is denoted by $\\Omega_n^H$.\n\\end{defn}\nNote that the connected sum of homology manifolds does not always exist. 
\nAnalogously let $\\mathfrak N_n^E$ and $\\Omega_n^E$ denote the cobordism group and\noriented cobordism group \nof ENR homology manifolds (the cobordisms are also ENR), respectively.\n\n\nAlmost all oriented cobordism groups $\\Omega_n^H$ are computed \\cite{BFMW96, Jo99, JR00}:\n\\[\n\\Omega_n^H = \n\\left \\{\n\\begin{array}{cc}\n\\mathbb Z & \\mbox{if $n = 0$} \\\\\n0 & \\mbox{if $n = 1, 2$} \\\\\n \\Omega_n^{TOP}[8\\mathbb Z +1] & \\mbox{if $n \\geq 6$},\n\\end{array}\n\\right.\n\\]\nwhere $\\Omega_n^{TOP}$ denotes the cobordism group of topological manifolds\n and the group $$\\Omega_n^{TOP}[8\\mathbb Z +1]$$ denotes \n the group of finite linear combinations \n $\\sum_{i \\in 8\\mathbb Z +1} \\omega_i i$ of cobordism classes of topological manifolds.\n By \\cite[Corollary~4.2]{Ma71} the oriented cobordism group of manifolds $\\Omega_n$ is always a subgroup\nof $\\Omega_n^H$.\n\n\nA \\emph{resolution} of a homology manifold $N$ \nis a topological manifold $M$ and a cell-like decomposition of $M$ \nsuch that the decomposition space is homeomorphic to the homology manifold $N$,\nthe quotient map $\\pi$ is proper \n and\n$\\pi^{-1} (\\partial N) = \\partial M$. 
\nBy \\cite{Qu82, Qu83, Qu87} homology manifolds are resolvable if a local obstruction is equal to $1$, more precisely we have the following.\n\\begin{thm}[\\cite{Qu82, Qu83, Qu87}]\\label{resol}\nFor every $n \\geq 4$ and every non-empty connected $n$-dimensional ENR homology manifold $N$ there is an integer local obstruction $i(N) \\in 8\\mathbb Z + 1$\nsuch that \n\\begin{enumerate}[\\rm (1)]\n\\item\nif $U \\subset N$ is open, then $i(U) = i(N)$,\n\\item\nif $\\partial N \\neq \\emptyset$, then \n$i(\\partial N) = i(N)$, \n\\item\n$i(N \\times N_1) = i(N) i(N_1)$ for any other homology manifold $N_1$,\n\\item\nif $\\dim N = 4$ and $\\partial N$ is a manifold, then there is a resolution if and only if $i(N)=1$ and \n\\item\nif $\\dim N \\geq 5$, then there is a resolution if and only if $i(N)=1$.\n\\end{enumerate}\n\\end{thm}\nBy \\cite{Th84, Th04} a closed $3$-dimensional ENR homology manifold $N$ is resolvable if \nits singular set has general position dimension less than or equal to one, that is\nany map of a disk into $N$ can be approximated by one whose image meets the singular set \n (i.e.\\ the set of non-manifold points) \nof $N$ in a $0$-dimensional set.\n\n\n\\begin{lem}\\label{resolcob}\nLet $M_{1}$ and $M_{2}$ be two closed $n$-dimensional manifolds, where $n \\geq 4$.\nIf both of them are resolutions of the ENR homology manifold $N$, then \n$M_1$ and $M_2$ are cobordant as manifolds.\n\\end{lem}\n\\begin{proof}\nIf there are two resolutions $f_1 \\co M_1 \\to N$ and $f_2 \\co M_2 \\to N$ of \n a closed $n$-dimensional homology manifold $N$, then \n as in the proof of \\cite[Theorem~2.6.1]{Qu82}\n take a resolution \n $$Y \\to X_{f_1} \\cup X_{f_2}$$ \n of the double mapping cylinder $X_{f_1} \\cup X_{f_2}$ \n of the maps $f_1$ and $f_2$ \n by applying \\cite[Theorem~1.1]{Qu83} and \\cite{Qu87}. 
\n This resolution exists because $X_{f_1} \\cup X_{f_2}$ is an $(n+1)$-dimensional ENR homology manifold and \n $i( X_{f_1} \\cup X_{f_2} ) =1$.\n Let \n $$X_{f_1} \\cup X_{f_2} \\to N \\times [-1,1]$$\nbe the natural map of the double mapping cylinder \nonto $N \\times [-1,1]$, where the target $N$ of the two mapping cylinders is mapped \nonto $N \\times \\{ 0 \\}$. \n \nIt follows that the composition \n $$Y \\to X_{f_1} \\cup X_{f_2} \\to N \\times [-1,1]\n$$ \n is a resolution; moreover, by \\cite[Theorem~1.1]{Qu83}\n the cell-like map \n $Y \\to X_{f_1} \\cup X_{f_2}$ can be chosen \nso that \n it is a homeomorphism over the boundary, hence\n $Y$ is a cobordism between $M_1$ and $M_2$.\n\\end{proof}\n\n\n\n\n\n\n\\subsection{Concordance and cobordism of decompositions}\\label{concdef}\n\n\nWe will study decompositions given by defining sequences $C_0, C_1, C_2, \\ldots$ such that \neach $C_n$ is a disjoint union of solid tori.\n We remark that more generally all the following notions work for decompositions whose stages are handlebodies instead of just tori.\nIn a closed $n$-dimensional manifold $M$,\n instead of decompositions $(M, \\mathcal D, A)$\nwe will consider decompositions with \nsome thickened link which contains the set $A$, so \n in the following\n a decomposition in $M$ is a quadruple $(M, \\mathcal D, A, L)$, where $L \\subset M$ is the thickened link and $A \\subset L$.\nFor example, an Antoine's necklace is situated inside an unknotted \nsolid torus, while it can be knotted in many different ways in the solid torus.\n \n\n\n\\begin{defn}[Concordance of decompositions]\\label{concordecomp}\nLet $M_1$ and $M_2$ be closed $n$-dimensional manifolds. 
\nThe decompositions\n $(M_1, \\mathcal D_1, A, L_1)$ and $(M_2, \\mathcal D_2, B, L_2)$\nare \\emph{cylindrically related}\nif \nthere exist\n toroidal defining sequences\n$C_0, C_1, C_2, \\ldots$ for $A$ and $D_0, D_1, D_2, \\ldots$ for $B$ and \nthere exists \na defining sequence $E_0, E_1, E_2, \\ldots$ for a decomposition $\\mathcal E$ \nof a compact $(n+1)$-dimensional manifold $W$ such that \n\\begin{enumerate}\n\\item\n$C_0 = L_1$ and $D_0 = L_2$, \n\\item\n$\\partial W = M_1 \\sqcup M_2$,\n\\item\neach $E_i$ is homeomorphic to $C_i \\times [0,1]$ and \n\\item\neach $E_i$ bounds the components of $C_i \\subset M_1$ and $D_i \\subset M_2$ that is \n $C_i \\times \\{ 0 \\}$ corresponds to $C_i$ and $C_i \\times \\{ 1 \\}$ corresponds to $D_i$. \n\\end{enumerate}\nTwo decompositions $(M_1, \\mathcal D_1, A, L_1)$ and $(M_2, \\mathcal D_2, B, L_2)$\n are \\emph{concordant} if \n there exist closed $n$-dimensional manifolds\n $M_1', \\ldots, M_k'$ and decompositions $(M_i', \\mathcal D_i', A_i', L_i')$ for every $i = 1, \\ldots, k$ \n such that \n \\begin{enumerate}\n \\item\n $(M_1, \\mathcal D_1, A, L_1)$ is cylindrically related to \n $(M_1', \\mathcal D_1', A_1', L_1')$, also for $i = 1, \\ldots, k-1$ every \n $(M_i', \\mathcal D_i', A_i', L_i')$ is cylindrically related to \n $(M_{i+1}', \\mathcal D_{i+1}', A_{i+1}', L_{i+1}')$\n and \n $(M_k', \\mathcal D_k', A_k', L_k')$ is cylindrically related to\n $(M_2, \\mathcal D_2, B, L_2)$ and \n \\item\nfor each $A_i' \\subset M_i'$, where $i = 1, \\ldots, k$, the two toroidal defining sequences \n $C_{i, 0}', C_{i, 1}', \\ldots$ and $C_{i, 0}'', C_{i, 1}'', \\ldots$ in $M_i'$ \n appearing in these successive cylindrically related decompositions\n are such that the $0$-th stages \n $C_{i, 0}'$ and $C_{i, 0}''$ are equal as subsets of $M_i'$.\n\\end{enumerate}\n Being concordant is an equivalence relation \n and the equivalence classes are called \\emph{concordance classes}.\n \\end{defn}\n \nHence being 
concordant implies that the two decompositions\n are in the same equivalence class of \nthe equivalence relation generated by being cylindrically related, that is \nthe two decompositions can be connected by a finite\nnumber of cylindrically related decompositions. \n Being concordant also implies that \n the $0$-th stages of two toroidal defining sequences for the two decompositions\n are connected by a single concordance in the usual sense. \nClearly in the definition \neach $E_i$ intersects some fixed collar of $\\partial W$ as the defining sequence in (4) of Definition~\\ref{defseq}.\n The concordance classes form a \ncommutative semigroup under the operation ``disjoint union\". Moreover this \nsemigroup is a monoid because the neutral element \nis the ``empty manifold'', that is the empty set $\\emptyset$.\nTo have a more meaningful neutral element we define the following.\n\n\n\\begin{defn}[Slice decomposition]\\label{slicedecomp}\nLet $M$ be a closed $n$-dimensional manifold and \nlet $(M, \\mathcal D, A, L)$ \nbe a decomposition of $M$ such that \nthere exists a toroidal defining sequence $C_0, C_1, C_2, \\ldots$ with $C_0 = L$ for $A$. \nThen $(M, \\mathcal D, A, L)$ is \\emph{slice} if \nit is concordant to a decomposition $(M', \\mathcal D', A', L')$\nwith defining sequence $C_0', C_1', C_2', \\ldots$ with $C_0' = L'$ \n such that\nthere exists\n a defining sequence $E_0, E_1, E_2, \\ldots$ for a decomposition $\\mathcal E$ \nof the $(n+1)$-dimensional manifold $M' \\times [0,1)$, where \neach $E_i$ consists of finitely many $D^{n-1} \\times D^2$ bounding the torus components $S^{n-2} \\times D^2$ of $C_i' \\subset M' \\times \\{0\\}$. \n \\end{defn}\n \n Analogously \nto Definitions~\\ref{concordecomp} and \\ref{slicedecomp}, we \ndefine the \\emph{oriented concordance} of decompositions by requiring \nall the manifolds to be oriented in the usual consistent way, \nin this way we also get a corresponding monoid. 
\nObserve that the set of concordance classes of slice decompositions is a submonoid \nof the monoid of concordance classes of decompositions. \n To obtain a group \nwe factor out the concordance classes by the classes represented by the slice decompositions\nand also by the classes of the form $$[(M, \\mathcal D, A, L)] + [(-M, \\mathcal D, A, L)],$$\nwhere $-M$ denotes the opposite orientation. Observe that all these classes form a submonoid. \n\n\\begin{defn}[Decomposition concordance group]\nDefine the relation $\\sim$ on the set of concordance classes of decompositions by the following rule: \n$a \\sim b$ exactly if\nthere exist slice decompositions $s_1$ and $s_2$ \nand decompositions $(M, \\mathcal D, A, L)$ and \n$(M', \\mathcal D', A', L')$ \nsuch that \n$\n a + [s_1] + [( M, \\mathcal D, A, L )] + [(- M, \\mathcal D, A, L )] = \n b + [s_2] + \n [( M', \\mathcal D', A', L')] + [(- M', \\mathcal D', A', L')].\n$\nThe relation $\\sim$ is a congruence and we obtain a commutative group by factoring out by this \ncongruence. We call this group the \\emph{oriented decomposition concordance group} \nand denote it by $\\Gamma_n$.\n\\end{defn}\n\n\nIf we confine the closed $n$-dimensional manifolds to $S^n$ and the cobordisms to $S^n \\times [0,1]$, then we obtain something similar to the classical link concordance.\nFor the convenience of the reader we repeat the definitions. 
\n\n\\begin{defn}[Concordance group of decompositions in $S^n$]\\label{concordecompS}\nLet $(S^n, \\mathcal D_1, A, L_1 )$ and $(S^n, \\mathcal D_2, B, L_2 )$\nbe decompositions of $S^n$ in the complement of $\\infty$.\n They \nare \\emph{cylindrically related}\nif \n there exist\ntoroidal defining sequences\n$C_0, C_1, C_2, \\ldots$ for $A$ and $D_0, D_1, D_2, \\ldots$ for $B$ and \nthere exists \na defining sequence $E_0, E_1, E_2, \\ldots$ for a decomposition $\\mathcal E$ \nof the compact $(n+1)$-dimensional manifold $S^n \\times [0,1]$ in the complement of $\\{ \\infty\\} \\times [0,1]$ such that \n\\begin{enumerate}\n\\item\n$C_0 = L_1$ and $D_0 = L_2$, \n\\item\neach $E_i$ is homeomorphic to $C_i \\times [0,1]$ and \n\\item\neach $E_i$ bounds the components of $C_i \\subset S^n \\times \\{0\\}$ and $D_i \\subset S^n \\times \\{1\\}$. \n\\end{enumerate}\nTwo decompositions are \\emph{concordant} if \n\\begin{enumerate}\n\\item\nthey are in the same equivalence class of \nthe equivalence relation generated by being cylindrically related \nso the two decompositions can be connected by a finite\nnumber of cylindrically related decompositions\nand \n\\item\nthe $0$-th stages of the defining sequences \nappearing in this sequence of cylindrically related decompositions\n are concordant as thickened links in the usual sense.\n\\end{enumerate}\nThe obtained equivalence classes are called \\emph{concordance classes}. \n If two decompositions of $S^n$ are given by defining sequences, \nthen in the connected sum (at $\\infty$) of the two $n$-spheres the ``disjoint union'' \ninduces \n a commutative semigroup operation on the set of concordance classes. 
\n Then \n by factoring out by the \nsubmonoid of classes of slice decompositions and \nclasses of the form $[(S^n, \\mathcal D, A, L)] + [(-S^n, \\mathcal D, A, L)]$\nwe\n get a group called the \\emph{decomposition concordance group in $S^n$}.\n We denote this group by $\\Delta_n$.\n\\end{defn}\n\n For example, the Whitehead decomposition in $S^3$ \nis slice \\cite{Fr82} and the Bing decomposition in $S^3$ is also slice\nbecause the Bing double of the unknot is slice. Observe that \nthe Bing decomposition $(S^3, \\mathcal B, C)$ has only singletons, where $C$ is a wild Cantor set.\nAs another example, a defining sequence in $S^3$ given by the replicating pattern of a solid torus and inside of it \n a link made of a sequence of ribbon knots linked with each other circularly \n can yield a slice decomposition.\n\n\n\nSince being concordant implies that the two decompositions can be connected by a finite\nnumber of cylindrically related decompositions, all \ninvariants \n of concordance classes defined through defining sequences \n are invariant under choosing another defining sequence for the same decomposition (while leaving the $0$-th stage unchanged). 
\nFor $n = 3$ in the following we restrict ourselves only to such toroidal \ndefining sequences $C_0, C_1, C_2, \\ldots$ of decompositions of the closed $n$-dimensional manifolds \nin Definitions~\\ref{concordecomp}-\\ref{concordecompS}\nwhich \nsatisfy \nthe following conditions:\n\\begin{defn}[Admissible defining sequences and decompositions]\\label{adm}\nSuppose \n\\begin{enumerate}\n\\item\nfor $m \\geq 1$ each $C_m$ has at least four components in a component of $C_{m-1}$ and \neach component $T$ of $C_m$ \nis linked to exactly two other components of $C_m$ in the ambient space $S^3$ with non-zero algebraic linking number\nand the splitting number of $T$ and each of the other components is equal to $0$, \n\\item\nfor $m \\geq 1$ the components $A_1, \\ldots, A_k$ of $C_m$ which are in a component $D$ of $C_{m-1}$ \nare linked in such a way that \nif a component $A_i$ is null-homotopic in a solid torus $T$ whose boundary is disjoint from all $A_i$, then \nall $A_i$ are in this solid torus $T$,\n\\item\n$\\cap_{m=0}^{\\infty} C_m$ is not separated by and not contained in any \n $2$-dimensional sphere \n$S$ for which \n$S \\subset C_m$ for some $m$,\n\\item\n every embedded circle in the boundary of a component of $C_m$\nwhich bounds no $2$-dimensional disk in this boundary\ncannot be shrunk to a point in the complement of $\\cap_{m=0}^{\\infty} C_m$. \n\\end{enumerate}\nWe call such defining sequences and decompositions \\emph{admissible}. \n\\end{defn}\n\n\\begin{prop}\nIn the connected sum (at $\\infty$) of two $3$-spheres the ``disjoint union'' as in Definition~\\ref{concordecompS} \nof two admissible toroidal decompositions is an admissible toroidal decomposition.\n\\end{prop}\n\\begin{proof}\nChecking the conditions (1)-(4) in Definition~\\ref{adm} is obvious; details are left to the reader. \n\\end{proof}\n\n\n\n\nThen we denote the arising concordance group in $S^3$ by \n$\\Delta_3^a$. 
\nFor example, Antoine's necklaces (or Antoine's decompositions) for $n = 3$ \nhave defining sequences satisfying these conditions \\cite{Sh68b}. \nWe note that by \\cite{Sh68b} their defining sequences also have the property of \nsimple chain type, which means that \nthe torus components are unknotted and they are linked like the Hopf link. \n We have \nthe natural group homomorphisms\n$$\n\\Delta_3^a \\to \\Delta_3\\mbox{\\ \\ \\ \\ \\ and\\ \\ \\ \\ }\\Delta_3 \\to \\Gamma_3$$\nand also for arbitrary $n$ the group homomorphism\n$$\n\\Delta_n \\to \\Gamma_n.$$\nWe will show that \nthe group $\\Delta_3^a$ has at least uncountably many elements. \n\n\n\nNow we define \\emph{cobordism} of decompositions, where \nwe restrict ourselves to \\emph{cell-like} decompositions (not necessarily admissible) for the cobordisms \nand for the representatives as well.\n\n\n\\begin{defn}[Cobordism of decompositions]\\label{borddecomp}\nLet $M_1$ and $M_2$ be closed $n$-dimensional manifolds and \nlet $(M_1, \\mathcal D_1, A)$ and $(M_2, \\mathcal D_2, B)$\nbe cell-like decompositions such that \nthere exist\n toroidal defining sequences\n$C_0, C_1, C_2, \\ldots$ for $A$ and $D_0, D_1, D_2, \\ldots$ for $B$. 
\nThen $(M_1, \\mathcal D_1, A)$ and $(M_2, \\mathcal D_2, B)$\nare \\emph{coupled}\nif there exists \na defining sequence $E_0, E_1, E_2, \\ldots$ for a cell-like decomposition $\\mathcal E$ \nof a compact $(n+1)$-dimensional manifold $W$ such that \n\\begin{enumerate}\n\\item\n$\\partial W = M_1 \\sqcup M_2$,\n\\item\neach $E_i$ is homeomorphic to the disjoint union \nof finitely many manifolds $P_j^{n-1} \\times D^2$, $j = 1, \\ldots, m_i$, where all $P_j^{n-1}$ are \ncompact $(n-1)$-dimensional manifolds and \n\\item\neach $E_i$ bounds the components of $C_i$ and $D_i$.\n\\end{enumerate}\nWe attach a collar $\\partial W \\times [0,1]$ to $W$ along its boundary and extend the decomposition \n$\\mathcal E$ to the collar by taking the product of $\\mathcal D_1$ and \n$\\mathcal D_2$ with the trivial decomposition on $[0,1]$, respectively. We say that this extended manifold \n$W \\cup (\\partial W \\times [0,1])$ and its decomposition is a \\emph{coupling} between $(M_1, \\mathcal D_1, A)$ and $(M_2, \\mathcal D_2, B)$.\n Finally, two decompositions are \\emph{cobordant} if they are in the same equivalence class of \nthe equivalence relation generated by being coupled. \nThe resulting equivalence classes are called \\emph{cobordism classes}. \n\\end{defn}\n\nClearly\neach $E_i$ intersects some fixed collar of $\\partial W$ as the defining sequence in (4) of Definition~\\ref{defseq}.\n The cobordism classes form a commutative group \nunder the operation ``disjoint union\". \nDenote this group by $\\mathcal B_n$.\n\nWe will show that taking the decomposition space of a cobordism between arbitrary given cell-like decompositions $\\mathcal D_{1, 2}$ as in Definition~\\ref{borddecomp} yields\n a group homomorphism \ninto the cobordism group of homology manifolds. \n\n\n\n\\section{Results}\\label{results}\n\n\n\\subsection{Computations in the concordance groups}\n\n\nWe are going to define invariants of elements of\nthe group $\\Delta_3^a$. 
\n With the help of these invariants, we will show that \n the group $\\Delta_3^a$ has at least uncountably many elements.\n\n\\begin{defn}\nFor a given defining sequence \n$C_0, C_1, C_2, \\ldots, C_n, \\ldots$ in $S^3$ let $$n_{C_0, C_1, C_2, \\ldots } = (n_0, n_1, n_2, \\ldots )$$ be \nthe sequence of the numbers of components of the manifolds $C_0, C_1, C_2, \\ldots$.\n\\end{defn}\n\nIf two decompositions of $S^3$ as in Definition~\\ref{concordecompS} are cylindrically related, then \nthey have defining sequences \n$C_0, C_1, C_2, \\ldots$ and $D_0, D_1, D_2, \\ldots$ such that \n$$\nn_{C_0, C_1, C_2, \\ldots } = n_{D_0, D_1, D_2, \\ldots }. \n$$\nBy \\cite[Theorem~3]{Sh68b} for \ncanonical defining sequences of an Antoine's necklace (or an Antoine decomposition) \nthe sequence $n_{C_0, C_1, C_2, \\ldots }$ \n uniquely exists (note that $C_0$ is only an unknotted solid torus which is not \n appearing in \\cite{Sh68b}).\n\n\\begin{prop}\\label{compnumbsame}\n Let $(S^3, \\mathcal D, A, C_0)$ be an admissible\ndecomposition and let\n $C_0, C_1, C_2, \\ldots$ and $D_0, D_1, D_2, \\ldots$ be\n admissible defining sequences \n for $(S^3, \\mathcal D, A, C_0 )$, where we suppose that $C_0 = D_0$. \nThen we have\n$$n_{C_0, C_1, C_2, \\ldots } = n_{D_0, D_1, D_2, \\ldots}.$$\n\\end{prop}\n\\begin{proof}\nSuppose that \n$C_0, C_1, C_2, \\ldots$ and $D_0, D_1, D_2, \\ldots$ are admissible defining sequences for \na decomposition $(S^3, \\mathcal D, A, C_0)$ such that $C_0 = D_0$. \nOf course \n$$\\cap_{n = 0}^{\\infty} C_n = A = \\cap_{n = 0}^{\\infty} D_n.$$\nWe use an algorithm applied in \\cite[Proof of Theorem~2]{Sh68b}. \nWe restrict ourselves to \none component of $C_0$ and to the components of the defining sequences in it, the following argument\n works the same way for the other components. 
\nWe can suppose that \n$\\partial C_1 \\cap \\partial D_1$ is a closed $1$-dimensional submanifold of $S^3$.\n Suppose some component $P$ of $\\partial C_1 \\cap \\partial D_1$\nbounds a $2$-dimensional disk $Q \\subset \\partial D_1$.\nAlso suppose that $P$ is an innermost component of $\\partial C_1 \\cap \\partial D_1$ in \n$\\partial D_1$ so $\\int Q \\cap \\partial C_1 = \\emptyset$. \nBy (4) in Definition~\\ref{adm} if $P$ does not bound a disk $Q'$ in $\\partial C_1$, then \n$P$ is not homotopic to constant in \nthe complement of $A$ but then $P$ cannot bound the disk $Q \\subset \\partial D_1$. \nHence \n$P$ bounds a disk $Q' \\subset \\partial C_1$ as well. \nThen the interior of the sphere $Q \\cup Q'$ \ndoes not intersect $A$ \nbecause of (3) in Definition~\\ref{adm}.\nSo we can modify $C_1$ by pushing $Q'$ through the sphere $Q \\cup Q'$ by a self-homeomorphism of the complement \nof $A$ \nand hence we obtain fewer circles in the new $\\partial C_1 \\cap \\partial D_1$.\nAfter repeating these steps finitely many times we obtain \na new $C_1$ such that $\\partial C_1 \\cap \\partial D_1$ contains no circles which bound disks\n on $\\partial C_1 \\cup \\partial D_1$. 
\n Similarly, by further adjusting $C_1$ in the complement of $A$ as written on \\cite[page~1198]{Sh68b}\n in order to eliminate the circles in $\\partial C_1 \\cap \\partial D_1$ which bound annuli\n we finally obtain a $C_1$ such that \n\n\\begin{itemize}\n\\item\nthe intersection $\\partial C_1 \\cap \\partial D_1$ is empty,\n\\item\nno component of $C_1$ is disjoint from all the components of $D_1$ and vice versa,\n\\item\neach component of $C_1$ is inside \na component of $D_1$ or it contains some components of $D_1$.\n\\end{itemize} \n\nThen we can see that there is a bijection between the \nnumber of components of $C_1$ and $D_1$ because of the following.\n\nIf a component of $C_1$ is in $\\int D_1$ and it is homotopic to constant \nin $\\int D_1$, then all the other components of $C_1$ are in the same component of $\\int D_1$ by (2) in Definition~\\ref{adm}.\nThis would result that no part of $A$ is in other components of $D_1$, which would contradict to \n(1) in Definition~\\ref{adm}\n so \nno \ncomponent of $C_1$ in $\\int D_1$ is homotopic to constant \nin $\\int D_1$. The same holds if we switch the roles of $C_1$ and $D_1$. \nThis means that \n\\begin{itemize}\n\\item\nthe winding number\nof a component $T$ of $C_1$ in the component of $D_1$ which contains $T$ is not equal to $0$ and the same holds\nfor $D_1$ and $C_1$ with opposite roles. \n\\end{itemize} \n\n\nFurthermore \nsuppose that $T$ is some component of $D_1$ and\n $T$ contains at least two components $T_1$ and $T_2$ of $C_1$.\n Then \n $T$ is linking with other component $T'$ of $D_1$ by (1) in Definition~\\ref{adm} with algebraic \n linking number non-zero. \nLet $T_3$ be a component of $C_1$ such that \n $T_3 \\subset T'$ or $T' \\subset T_3$. \nIf $T_3 \\subset T'$, then \n$T_1$ and $T_2$ are linking with $T_3$ with algebraic linking number non-zero.\nIf $T' \\subset T_3$, then \n$T$ is not in $T_3$ because for example \n$T_1$ cannot be in $T_3$. 
\nBut then $T$ is linking with $T_3$ with algebraic linking number non-zero since the same holds for $T$ and $T'$. \n So again we obtained that $T_1$ and $T_2$ are linking with $T_3$ with linking number non-zero.\nNow, there is a $T''$ component of $D_1$\n which is linking with $T'$ with linking number non-zero and which is disjoint from all the previously mentioned tori \n ($T', T'' \\subset T_3$ is impossible because then both of $T', T''$ are linking with $T$ and also with each other \n and this contradicts to (1) in Definition~\\ref{adm}).\n Let $T_4$ be a component of $C_1$ such that \n $T_4 \\subset T''$ or $T'' \\subset T_4$. There are a number of cases to check. \n If $T_3 \\subset T'$\n and $T_4 \\subset T''$, then \n $T_3$ is linking with $T_4$. \n If $T_3 \\subset T'$ but $T'' \\subset T_4$, then \n since $T_4$ cannot contain \n $T$ or $T'$, we have again that \n $T_3$ is linking with $T_4$. \n Finally, if $T' \\subset T_3$, then since $T''$ cannot be in $T_3$, we have that \n $T_4 \\subset T''$ implies that $T_3$ and $T_4$ are linking and\n $T'' \\subset T_4$ implies that \n since $T_4$ is disjoint from all the other tori, again $T_4$ is linking with $T_3$.\nSo we obtain that \n$T_1$ and $T_2$ are linking with $T_3$\n and $T_3$ is linking with $T_4$\n resulting that $T_3$ is linking with three other components of $C_1$ which contradicts to \n(1) in Definition~\\ref{adm}.\nSummarizing, we obtained the following.\n\\begin{itemize}\n\\item\nThe intersection $\\partial C_1 \\cap \\partial D_1$ is empty,\n\\item\nno component of $C_1$ is disjoint from all the components of $D_1$ and vice versa,\n\\item\nevery component of $C_1$ contains one component of $D_1$ or is contained in one component of $D_1$, \n\\item\nno component of $C_1$ contains more than one component of $D_1$ and vice versa.\n\\end{itemize}\nAll of these imply that \nthe number of components of $C_1$ is equal to the number of components of $D_1$. 
\nWe repeat the same line of arguments for the components of $C_2$ and $D_2$ lying in each component of $C_1$ or $D_1$ separately, \nwhere we perform the previous algorithm in the larger component which contains the smaller one, \nand so on; in this way we get the result.\n\end{proof}\n\n\begin{rem}\nIf in (1) in Definition~\ref{adm}\nwe require having splitting number greater than $0$ instead of having \nalgebraic linking number non-zero, then \nthe previous arguments could be repeated \nto get a similar result \nif we could prove that \nhaving two solid tori with splitting number greater than $0$ \nand embedding one circle into each of these tori \nwith non-zero winding numbers \nimplies that the splitting number of these two knots is greater than $0$. \nFor similar results about knots and their unknotting numbers, see \cite{ST88, HLP22}. \n\end{rem}\n\n\nIt follows that if two admissible decompositions of $S^3$ are\nin the same equivalence class \nof \nthe equivalence relation generated by being cylindrically related, \nthen they determine the same sequence of numbers of components. \nSo if we define the operation \n$$\n(n_0, n_1, \ldots ) + (m_0, m_1, \ldots ) = (n_0 + m_0, n_1 + m_1, \ldots )\n$$\non the set of sequences, then \nthe induced map \n$$\n[ ( S^3, \mathcal D, A, C_0 ) ] \mapsto n_{C_0, C_1, C_2, \ldots},$$\nwhere $C_0, C_1, C_2, \ldots$ is some admissible defining sequence, \nis a monoid homomorphism. 
\n\n\begin{defn}\nFor an equivalence class $x$ represented by \n the admissible decomposition $( S^3, \mathcal D, A, C_0 )$ and for its admissible defining sequence\n $C_0, C_1, \ldots$ \nlet \n$$\nL(x) = (l_1, l_2, \ldots)$$\nbe the sequence whose $m$-th term $l_m$ is the number mod $2$ \nof the components of $C_m$ which have non-zero algebraic linking number with some other component of $C_m$.\n\end{defn}\n\n\begin{lem}\nThe map $L$ is well-defined, i.e.\ \nadmissible decompositions which are concordant through finitely many\ncylindrically related admissible decompositions have the same value of $L$.\n\end{lem}\n\begin{proof}\nIf decompositions with defining sequences $C_0, C_1, \ldots$ and \n$D_0, D_1, \ldots$ \nare cylindrically related, then for every $m \geq 0$ \n the pairs of components of $C_m$ and the pairs of corresponding components of $D_m$ \n have the same algebraic linking numbers. Suppose that a decomposition has two \n admissible defining sequences \n$C_0, C_1, \ldots$ and \n$D_0, D_1, \ldots$ \n such that $C_0 = D_0$; we have to show that the linking numbers are equal to $0$ simultaneously for both of them (for the components of\n $C_0$ and $D_0$ this is obviously true). 
Of course we know\n that the components are in bijection with each other \n by the proof of Proposition~\\ref{compnumbsame}\n and in every component of $C_0$\n after some deformation\n we have that \n \\begin{itemize}\n\\item\nthe intersection $\\partial C_1 \\cap \\partial D_1$ is empty,\n\\item\nno component of $C_1$ is disjoint from all the components of $D_1$ and vice versa,\n\\item\nevery component of $C_1$ contains one component of $D_1$ or is contained in one component of $D_1$, \n\\item\nno component of $C_1$ contains more than one component of $D_1$ and vice versa.\n\\end{itemize}\nIf a component $T$ of $C_1$ is linked with a component $T'$ of $C_1$\nwith linking number $0$, then \nany knot in $T'$ is linked with $T$ with linking number $0$.\nAlso, if a knot in $T'$ is linked with $T$ with linking number $0$, then \n$T$ and $T'$ are linked with linking number $0$.\nFor every $m \\geq 1$ \nafter a finite number of iterations\nof the algorithm in the proof of Proposition~\\ref{compnumbsame} we get the result.\n\\end{proof}\n\n\n\nOf course \nthe map $L$\nis a monoid homomorphism moreover \nfor a class $x$ represented by a slice decomposition \nwe have $L(x) = (0, 0, \\ldots)$.\n Also, for a class $x$ of the form $[(S^n, \\mathcal D, A)] + [(-S^n, \\mathcal D, A)]$\nwe have $L(x) = (0, 0, \\ldots)$ since all the linking components appear twice.\n\n\n\n\\begin{defn}\nWe call the function \n$$\n\\nu \\co \\Delta_3^a \\to \\mathbb Z_2^{\\mathbb N}$$\nobtained by \n$\\nu ( [x] ) = L(x)$ \n the \\emph{mod $2$ component number sequence} of the elements of $\\Delta_3^a$. 
\n\\end{defn}\n\n\n\\begin{thm}\\label{uncount1}\nThere are at least uncountably many different elements in the concordance group $\\Delta^a_3$.\nThese can be represented by Antoine decompositions.\n\n\\end{thm}\n\\begin{proof}\nFor every element $(l_0, l_1, \\ldots) \\in \\mathbb Z_2^{\\mathbb N}$, where\n $l_0 = 0$,\n we have an Antoine decomposition representing a class $x$ such that \n $\\nu([x]) = (l_0, l_1, \\ldots)$.\n Hence \nwe get uncountably many different classes in the concordance group.\n\\end{proof}\n\n\n\n\n\n\n\n\\subsection{Computations in the cobordism group}\\label{computebord}\n\n\n\n\n\n\n\n\\begin{prop}\\label{manifcob}\nSuppose that $n \\geq 0$ and $M$ is a closed manifold. \nA closed $n$-dimensional homology manifold $N$\nhaving a resolution $M \\to N$ \nis cobordant in $\\mathfrak N_n^E$ to $M$.\n\\end{prop}\n\\begin{proof}\nTake $M \\times [0,1]$ and consider the cell-like decomposition $\\mathcal D$ of $M$ which results the homology manifold $N$.\n If $\\mathcal S(X)$ denotes the collection of singletons in a space $X$, \nthen\n$\\mathcal D \\times \\mathcal S([0, 1\/2])$ union $\\mathcal S( M \\times (1\/2, 1] )$ \n is a cell-like decomposition of $M \\times [0,1]$, denote it by $\\mathcal E$.\nWe have to show that \nthe quotient space \n$$M \\times [0,1] \/ \\mathcal E$$\nis an $(n+1)$-dimensional homology manifold with boundary homology manifolds\n$N$ and $M$.\nTake the closed manifold $$M \\times [0,1] \\cup_{\\varphi} M \\times [0,1],$$\nwhere\n$\\varphi \\co \\partial ( M \\times [0,1]) \\to \\partial (M \\times [0,1])$\nis the identity map.\nSince $M\/ \\mathcal D$ is $n$-dimensional, \nthe doubling of the decomposition $\\mathcal E$ on \n$M \\times [0,1] \\cup_{\\varphi} M \\times [0,1]$\nyields\na finite dimensional quotient space, we get this by using \nestimations for the covering dimension, see \\cite{HW41} and \\cite[Corollary~2.4A]{Da86}.\nSo the decomposition space $P$ obtained by factorizing \n$M \\times [0,1] 
\\cup_{\\varphi} M \\times [0,1]$\nby the double of $\\mathcal E$ is a closed finite dimensional homology manifold by \\cite[Proposition~8.5.1]{DV09}. \nSince this space \nhas an open set homeomorphic to $\\mathbb R^{n+1}$, it is $(n+1)$-dimensional.\nWe obtain the space $M \\times [0,1] \/ \\mathcal E$\nby cutting $P$ into two pieces along two subsets homeomorphic to \n$M$ and $N$. This means that $M$ and $N$ are cobordant in $\\mathfrak N_n^E$.\n\\end{proof}\n\n\nSo if every $3$-dimensional homology manifold is resolvable, then \n$\\mathfrak N_3^E = 0$.\nAlso note that the decomposition space $S^3 \/ \\mathcal W$ of the Whitehead decomposition $\\mathcal W$\nis a null-cobordant $3$-dimensional homology manifold, because $[S^3] = 0$.\n\n\n\n\\begin{prop}\\label{cobsubgroup}\nFor $n \\geq 4$ the cobordism group $\\mathfrak N_n$ is a subgroup of $\\mathfrak N_n^E$.\n\\end{prop}\n\\begin{proof}\nLet $M_{1}$ and $M_2$ be closed manifolds. \nIf the two cobordism classes $[M_1]$ and $[M_2]$ in \n$\\mathfrak N_n^E$ coincide, then \nsince $M_i$ are manifolds, we have $i(M_i)=1$ hence \na cobordism in $\\mathfrak N_n^E$ between $M_1$ and $M_2$ also has index $1$ so\nthis cobordism is resolvable. \nBy \\cite[Theorem~1.1]{Qu83} and Lemma~\\ref{resolcob} there is a manifold cobordism between \n$M_1$ and $M_2$.\n\\end{proof}\n\n\n\n\n\n In Definition~\\ref{borddecomp} for $i = 1, 2$ the space $M_i\/\\mathcal D_i$ is an $n$-dimensional ENR homology manifold and \n$W \/ \\mathcal E$ is an $(n+1)$-dimensional ENR homology manifold if we add the appropriate collars by Lemma~\\ref{decomphomolmanif}. \n If $(M, \\mathcal D)$ is such a cell-like decomposition, then \nwe can assign the cobordism class of the decomposition space $M \/ \\mathcal D$\nto the cobordism class of $(M, \\mathcal D)$. 
\nThis map \n$$\\beta_n \\co \\mathcal B_n \\to \\mathfrak N_n^E$$\n$$ [(M, \\mathcal D) ] \\mapsto [M \/\\mathcal D]$$\nis a group homomorphism.\nThe image of $\\beta_n$\ncontains the classes represented by \ntopological manifolds since trivial decompositions always exist\nand it contains also the classes represented by homology manifolds having appropriate resolutions.\nFor $n = 1, 2$ all the homology manifolds are topological manifolds \\cite{Wi79}\n so the homomorphism $\\beta_n$ is \nsurjective. \nTake the natural forgetting homomorphism \n$$\nF_n \\co \\mathcal B_n \\to \\mathfrak N_n$$\n$$\n[(M, \\mathcal D) ] \\to [M].$$\nFor every $n \\geq 0$ the diagram \n\\begin{center}\n\\begin{graph}(6,2)\n\\graphlinecolour{1}\\grapharrowtype{2}\n\\textnode {A}(0.5,1.5){$\\mathcal B_n$}\n\\textnode {X}(5.5, 1.5){$\\mathfrak N_n^E$}\n\\textnode {U}(3, 0){$\\mathfrak N_n$}\n\\diredge {A}{U}[\\graphlinecolour{0}]\n\\diredge {U}{X}[\\graphlinecolour{0}]\n\\diredge {A}{X}[\\graphlinecolour{0}]\n\\freetext (3,1.2){$\\beta_n$}\n\\freetext (1.2, 0.6){$F_n$}\n\\freetext (4.6, 0.6){$\\varphi_n$}\n\\end{graph}\n\\end{center}\nis commutative by Proposition~\\ref{manifcob}, where $\\varphi_n$ is the natural map assigning the cobordism class \n$[M] \\in \\mathfrak N_n^E$ to the cobordism class $[M] \\in \\mathfrak N_n$.\n\n\\begin{prop}\\label{deltaimage}\nFor every $n \\geq 0$ \n the image of $\\beta_n$ is equal to the \nsubgroup of $\\mathfrak N_n^E$ generated by the cobordism classes of topological \nmanifolds.\n\\end{prop}\n\\begin{proof}\nThe statement follows from the fact that\n$F_n$ is surjective. \n\\end{proof}\n\n\n\\begin{prop}\\label{bordkep}\nFor $n \\geq 1$, we have $\\beta_n(\\mathcal B_n ) = \\mathfrak N_n$ in $\\mathfrak N_n^E$.\n\\end{prop}\n\\begin{proof}\nBy Proposition~\\ref{cobsubgroup} and Proposition~\\ref{deltaimage} \n we have $\\beta_n(\\mathcal B_n ) = \\mathfrak N_n$ for $n \\geq 4$.\nFor $n = 3$, since $\\mathfrak N_3 = 0$, \nthe statement also holds. 
For $n = 2$, the group $\mathfrak N_2$ is isomorphic to $\mathbb Z_2$,\nso by Proposition~\ref{deltaimage} it is enough to show that \n$\beta_2(\mathcal B_2 ) = \mathbb Z_2$. But\n$[{\mathbb R}{P}^2]$ is not null-cobordant in $\mathfrak N_2^E$\nbecause ${\mathbb R}{P}^2$ has a non-zero characteristic number as a smooth or topological manifold and then \nby \cite{BH91} it cannot be null-cobordant.\nFor $n = 1$, of course $\mathfrak N_1^E = \mathfrak N_1 = 0$.\n\end{proof}\n\n\n\begin{rem}\nInstead of cell-like decompositions, which yield homology manifolds, it would be possible to study \ndecompositions which are just homologically acyclic and nearly $1$-movable, see \cite{DW83}. These yield homology manifolds as well.\n Without being nearly $1$-movable, these can yield non-ANR homology manifolds. \n\end{rem}\n\n\n\n\nAs we have seen, the class $\beta_n([(M, \mathcal D)]) = [M\/\mathcal D] \in \mathfrak N_n$ \nreveals little about the decomposition $\mathcal D$.\nIf we add more details to the homology manifolds and their cobordisms,\n then we can obtain a finer invariant of the cobordism group of decompositions. \n Recall that \n the singular set \n of a homology manifold is the set of non-manifold points, which is a closed set.\n \n\n\begin{defn}[$0$- and $1$-singular homology manifolds]\label{0-1-homolmanif}\nA homology manifold is \emph{$0$-singular} if its singular set is a \n$0$-dimensional set.\nA compact homology manifold with collared boundary is \emph{$1$-singular} if its singular set $S$\n consists of properly embedded arcs such that $S$ is a direct product \n in the collar. 
\nThe closed $n$-dimensional $0$-singular \nhomology manifolds $X_1$ and $X_2$ are \\emph{cobordant} \nif there exists a compact $(n+1)$-dimensional \\emph{$1$-singular} homology manifold $W$ such that \n$\\partial W$ is homeomorphic to the disjoint union of $X_1$ and $X_2$ and\n $\\partial W \\cap S$ coincides with the singular set of $X_1 \\sqcup X_2$ under this homeomorphism.\n The set of (oriented) cobordism classes is denoted by \n$\\mathfrak N_n^S$ (and $\\Omega_n^S$).\n\\end{defn}\n\nThe set of cobordism classes\n$\\mathfrak N_n^S$ and $\\Omega_n^S$ are groups with the disjoint union as group operation. \nDenote by $\\mathfrak M_n^0$ the cobordism group of $0$-singular manifolds where the cobordisms are arbitrary but \nthe singular set of the cobordisms is not the entire manifold.\n\n\n\nNote that the representatives of the classes in $\\beta_n(\\mathcal B_n)$ are $0$-singular \nand the cobordisms between them \nhave not only singular points because the boundary has not only singular points since the singular set is a compact $0$-dimensional set. 
\nThere are natural homomorphisms\n$$\n i_n' \\co \\mathcal B_n' \\to \\mathfrak N_n^S \\mbox{, \\ \\ \\ \\ \\ }i_n \\co \\mathcal B_n \\to \\mathfrak M_n^0 \\mbox{\\ \\ \\ \\ \\ and\\ \\ \\ \\ \\ } \\mathcal B'_n \\to \\mathcal B_n\n $$\n where $ \\mathcal B_n'$ is the version of $\\mathcal B_n$ yielding $0$-singular spaces\n and $1$-singular cobordisms, there is \n the forgetful map \n$$\n\\varphi_n \\co \\mathfrak N_n^S \\to \\mathfrak M_n^0$$\nand then the diagram \n\\begin{equation*}\n\\begin{CD}\n \\mathcal B'_n @> i_n' >> \\mathfrak N_n^S \\\\\n @VVV @VV \\varphi_n V \\\\\n\\mathcal B_n @> i_n >> \\mathfrak M_n^0 @> \\psi_n >> \\mathfrak N_n^E \n\\end{CD}\n\\end{equation*}\ncommutes.\nObserve that $\\psi_n$ is injective, $\\varphi_n$ is surjective and since $\\beta_n ( \\mathcal B_n ) = \\psi_n \\circ i_n ( \\mathcal B_n )= \\mathfrak N_n$, \nthe image $ i_n'( \\mathcal B_n' )$ is in $\\varphi^{-1}_n \\circ \\psi^{-1}_n( \\mathfrak N_n )$, which could be a larger group\nthan $\\mathfrak N_n$. \n\n\n\n\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\n\n\n\n\n\n\n\n\n\\section*{Acknowledgements}\nThis work is partially supported by NSF CREST Grant HRD-1736209, NSF grant CNS-1423481, and DoD ARL Grant W911NF-15-1-0518.\n\n\\section*{Acknowledgements}\nThis research is partially supported by NSF Grants CNS-1423481, CNS-1538418, DoD ARL Grant W911NF-15-1-0518.\n\\clearpage \\fi\n\\bibliographystyle{IEEEtran}\n\n\\section{Implementation in AWS}\\label{sec_imp}\nIn this section we present a proof of concept implementation of AB-ITS model in Amazon Web Services (AWS) \\cite{aws}. We use AWS IoT service along with AWS Greengrass \\cite{grass} (to provide edge functionality) to setup a realistic environment where vehicles are simulated as AWS IoT things. In particular, these stand alone services are implemented as a Lambda function \\cite{lambda} using Boto \\cite{boto} which is AWS SDK for Python. 
It should be noted that in this implementation no long-term GPS coordinates of vehicles are collected in cloudlets. This reduces the privacy concerns of end users and encourages adoption of the proposed model.\n\n\subsection{Use Cases Overview}\nUS-DOT has proposed an extensive list of ITS applications \cite{its-app}, which we have used to create our real-life connected use-cases. Our implementation addresses trust, security and privacy concerns of end users which must be resolved before bringing ITS technology into practice. As most applications are safety related, we have considered accident and ice-on-road (tire slip) alerts as our running use-case, along with real-time detection and prevention of rogue (or malicious) vehicles on the road. In the use-cases, we have also shown how different entities (S, TC, {$\mathrm{V_T}$}~etc.) fit in the formal model definitions.\n\n\textbf{Accidental Safety and Ice-Threat :} Moving vehicles (S) can generate warnings for other vehicles ({$\mathrm{V_T}$}) in their surroundings based on an event which they sensed or encountered. In our use-case, we consider `ice-threat' alerts based on a tire slip: vehicles receive a warning if any nearby vehicle senses a slip and broadcasts it, after the security policies implemented at the edge infrastructures (TC) are satisfied. These policies take into account who is the source of the alert, the location of the vehicle (ATT) and how many other vehicles encountered a similar event, before forwarding (OP) these alerts to other nearby approaching vehicles ({$\mathrm{V_T}$}). It is possible that a single vehicle (S) sends an ice-threat alert to its associated cloudlet (TC), while other vehicles in the area sense no such movement. The edge is therefore able to filter out such a malfunctioning or deliberately malicious attempt, notify law enforcement, and put that vehicle on the rogue-vehicles list. 
Further, in case of an accident, alert messages will be generated and sent only to police or medical vehicles in the area. Based on the type of alerts and who generates it, policies are defined in the system to ensure trusted, anonymized and relevant notifications.\n\\input{Figures\/comp}\n\\input{Tables\/table_conf1}\n\n\\textbf{Compromised Rogue Vehicles :} Rogue vehicle either intentionally or due to sensor failure can send fake messages to other vehicles. Misbehaving and compromised vehicles must be detected in smart transportation and alerts must be issued immediately to discard the information sent by them. In our use-case, central cloud authority (S) informs edge infrastructures (TC) with a list of detected rogue vehicles and when any message is received by an edge from these vehicles, it is not forwarded to other vehicles. Further, law enforcement is informed about the location (ATT) of a rogue vehicle to prevent fake message dissemination. This approach prevents the need to update and publish revocation list to all vehicles eliminating the bandwidth and connectivity issues.\n\n\\input{Figures\/Policy}\n\n\n\\subsection{Proof of Concept}\nWe will first go over the system configuration along with implemented security policies defined in the cloudlet before we delve into more details of our developed prototype.\n\n\\textbf{System Architecture :} Figure \\ref{fig-comp} represents system architecture along with different components implemented for our prototype. All the vehicles and static smart entities including edge infrastructures must be registered with a central cloud controller to ensure trusted authorized participating entities. Further, the controller also helps in the administrative phase (discussed later) which includes providing a list of edge infrastructures on designated path of moving vehicle. 
Once the registration is done and vehicles are sent a list of edge infrastructures, the vehicles publish and subscribe to secure (and reserved) MQTT topics created in each of cloudlets which get dynamically assigned based on vehicle current GPS coordinates.\nIt is also possible that the moving vehicle keeps on sending coordinates to the cloud and the controller lets them know the IP address of the nearby edge infrastructures to which the vehicle has to associate.\nThese cloudlets (represented as AWS Greengrass) hold the implemented security policies, a lambda function (similar to policy decision point - PDP \\cite{hu2013guide}) for policy evaluation and the policy enforcement point (PEP) to check messages received, anonymize and filter them and based on the type of alert send them to relevant entities. It should be noted that only alert messages go through the enforcement point, whereas no alerts messages are discarded after logging. Table \\ref{tab-2} lists different AWS system parameters to provide a better understanding of performance metrics shown later in this section.\n\n\\textbf{Security Policies :} We defined attributes based policies which are enforced at the edge, to check who is allowed to send messages, conditions when the message is forwarded to other vehicles and who are authorized recipients for different types of alerts in the system.\n\\input{Figures\/gps}\n\\input{Figures\/seq}\nVarious attributes can be included in policy but for the sake of simplicity we used only vehicle type to determine the source and destination of messages. As shown in Figure \\ref{fig-policy}, security policies are listed in JSON format, where three types of alerts are being generated, `TireSlip', `Accident' and `Rogue' vehicle updates, as denoted by red rectangular boxes.\nWe defined separate set of conditions for each alert type. 
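The dynamic assignment of vehicles to nearby cloudlets based on their GPS coordinates can be sketched as follows. This is a minimal illustration, not the prototype's code: the cloudlet registry, the coverage radius, and the function names are our own assumptions.

```python
import math

# Hypothetical cloudlet registry: id -> (lat, lon) of the region center.
CLOUDLETS = {
    "edge-A": (29.4247, -98.4936),
    "edge-B": (29.5100, -98.4500),
}
REGION_RADIUS_KM = 5.0  # assumed coverage radius of each cloudlet


def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two GPS points in kilometres."""
    r = 6371.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))


def assign_cloudlets(lat, lon):
    """Return all cloudlets whose region contains the vehicle position.

    A vehicle may fall into several overlapping regions, in which case
    it associates with all of them.
    """
    return [cid for cid, (clat, clon) in CLOUDLETS.items()
            if haversine_km(lat, lon, clat, clon) <= REGION_RADIUS_KM]
```

Returning a list rather than a single cloudlet matches the overlapping-region behaviour described below, where a vehicle can associate to multiple cloudlets at a time.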
For example, in `TireSlip' alerts, it is first checked if it is generated (`Source' attribute) by a regular vehicle (specified by attribute value `Vehicle') or by law enforcement (`Police' or `Medical'). Policy then checks number of vehicles which created similar alerts (specified by \"Number\" attribute). Notification to other vehicles depends on how many alerts were generated or who is the source of alert. If the number of alerts are greater than or equal to 2 from regular vehicles, or even a single alert from police or medical vehicle, \"Ice-threat High\" notifications are sent to other associated vehicles of the cloudlet. However, if an alert is generated by one regular vehicle, \"Ice Threat - Low\" is sent for all member vehicles.\nIt must be noted that the sender vehicles and the receiving vehicle must be associated with the same cloudlet to exchange notifications, which also ensure relevance of alerts being received. Similarly, for accident use case, notification is only sent to nearby police vehicles and medical with assistance message. Here the source is not defined, since any smart entity including vehicle, or nearby smart road side sensor or a pedestrian can send message to police or medical vehicles. It is also possible that information about the vehicle including color, license plate number or other identifying information can be sent to law enforcement.\n Another important use case is to enable a central law enforcement that can regularly publish and update the list of rogue vehicles. This list for example, could help locate vehicles that have been stolen or implicated in amber alert\n In the last part of our policy for `Rogue', vehicle IDs \\texttt{Car-X, Car-Y, Vehicle-Z} are stated as rogue and any message from these vehicles is not forwarded. This is a dynamic policy as the list is periodically updated by a central authority. 
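The TireSlip branch of this policy can be captured in a few lines. The following Python sketch is ours, with the function name and the exact notification strings chosen for illustration.

```python
def evaluate_tireslip(source_type, alert_count):
    """Decide the ice-threat notification for TireSlip alerts.

    Mirrors the policy described above: a single alert from a police or
    medical vehicle, or two or more alerts from regular vehicles,
    escalates to a high-severity warning; one alert from a regular
    vehicle yields a low-severity warning.
    """
    if source_type in ("Police", "Medical"):
        return "Ice Threat - High"
    if source_type == "Vehicle":
        return "Ice Threat - High" if alert_count >= 2 else "Ice Threat - Low"
    return None  # alerts from unrecognized sources are not forwarded
```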
Also to extend the use-case, it is possible when an edge receives a message from a rogue vehicle, it can forward that information to nearby police along with vehicle information like license number and color. The defined policies are only for alert messages, and other `no alerts' messages are just checked by the policy and are logged and dropped without forwarding to any vehicle. Note that policies can also be implemented inside the smart vehicle as well to provide user privacy preference aware notifications, but are not implemented in our prototype\n\n\\textbf{Implementation Details :} The implementation of our proposed solution involves two steps: the administrative phase and the operational phase. Administrative phase includes setup of cloudlets by city administration, setting up the boundaries for each cloudlet, dynamic assignment of moving vehicles to edge infrastructures, and attributes and alerts inheritance from edges to the member vehicles. To be part of ITS, vehicles and smart infrastructures need to have one time registration with central cloud which ensures that smart entities are trusted and benign.\nOnce registered, the moving vehicles can be provided with a mapped list of edges which will arrive in their designated route to which they are allowed to connect. As the vehicles get dynamically associated to different cloudlets, they are able to publish and subscribe to the reserved topics on each edge infrastructures. The operational phase consists of how these attributes and assignment to cloudlets ensure the relevance of alerts to the vehicles and how the edge deployed security policies are used to mitigate security and privacy concerns of users who are using AB-ITS system.\n\n\n\nIn our prototype, we demarcated a big geographic location area into several smaller regions and each region has a trusted cloudlet (TC) which serves all the smart entities in the region as shown in Figure \\ref{fig gps}. 
We used a python script to simulate the movement of vehicles in the system, shown as green dots, which sends MQTT messages containing GPS coordinates to a central cloud. Service in cloud determines which edge cloudlets are in the surrounding area of the vehicle and then assigns the vehicle to the nearby cloudlets. Following is the sample MQTT payload sent by a moving vehicle to its shadow reserved topic \\texttt{\\$aws\/things\/`Vehicle-Name'\/shadow\/update} in the cloud for dynamic cloudlet assignment:\n\n \\begin{itemize}[leftmargin=*]\n \\item[] \\texttt{\\{\"state\": \\{\"reported\": }\n \\item[] \\texttt{\\;\\;\\;\\;\\;\\;\\;\\;\\;\\;\\;\\;\\;\\;\\;\\;\\;\\;\\;\\;\\;\\;\\;\\;\\;\\;\\; \\{\"Latitude\": \"28.1452683\",}\n \\item[] \\texttt{ \\;\\;\\;\\;\\;\\;\\;\\;\\;\\;\\;\\;\\;\\;\\;\\;\\;\\;\\;\\;\\;\\;\\;\\;\\;\\;\\; \"Longitude\":\"-97.567259\"\\}\\}\\}}\n\\end{itemize}\n\\iffalse\n\\begin{itemize}[leftmargin=*]\n \\item[] \\texttt{\\{\"state\": \\{\"reported\": }\n \\item[] \\texttt{\\{\"Latitude\": \"28.14526\",\"Longitude\":\"-97.5672\"\\}\\}\\}}\n\n\\end{itemize}\n\\fi\nAs the path of vehicle is mostly known, these edge assignments can be pro-active in nature as well, mitigating the concern of cloud latency. In such a case, the cloud controller can send a list of edge infrastructures which will be on the designated path of the vehicle to get them associated when vehicles come in their range. It is also possible that these cloudlets have a wireless range and the vehicles which are in the range get automatically assigned to these cloudlets. A vehicle can associate to multiple cloudlets at a time based on their overlapping location. 
In Figure \\ref{fig gps}, static smart objects such as stop warning signs, road work ahead signs or other infrastructures have a fixed allocation to cloudlets; the dotted lines represent the predicted future cloudlets of a vehicle, while the solid pink lines show its current cloudlets.\n\n\nOnce vehicles are assigned to nearby cloudlets, the operational phase starts: the vehicles send messages to their shadow reserved topic (which gets created when the vehicle becomes a member of a cloudlet) in their associated edges, which enforce security policies to ensure trusted and authorized alerts reach nearby vehicles in a near real-time manner. In all the policies defined, the privacy of the sender is well preserved, as the messages are anonymous and do not contain any personally identifiable information. The following is a sample MQTT message sent by a vehicle:\n \\begin{itemize}[leftmargin=*]\n \\item[] \\texttt{\\{\"state\":\\{\"reported\": }\n \\item[] \\texttt{\\;\\; \\{\"Longitude\": \"29.472741982\", }\n \\item[] \\texttt{\\;\\;\\;\\; \"Latitude\": \"-98.50038363\", }\n \\item[] \\texttt{\\;\\;\\;\\; \"Time\": \"2019-03-19 11:27:40.237734\", }\n \\item[] \\texttt{\\;\\;\\;\\; \"Velocity\": \"30\", \"Direction\": \"north\", }\n \\item[] \\texttt{\\;\\;\\;\\;\\;\\;\"Elevation\": \"650\", \"Posit. Accuracy\":}\n \\item[] \\texttt{\\;\\;\\;\\; \"5\", \"Steering Wheel Angle\": \"0\",}\n \\item[] \\texttt{\\;\\;\\;\\; \"Alert\": myAlert\\}\\}\\}}\n\\end{itemize}\nIn this message, besides the BSM \\cite{bsm} attributes, an attribute \"Alert\" also exists, which defines what kind of alert has been sent from the vehicle to the cloudlets. For our use-cases, it can be an \"Accident\", \"Tireslip\", or \"Null\" value, where Null signifies no alert. 
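The edge-side policy check applied to such a message can be sketched as below. This is an illustrative reconstruction, not the deployed enforcer: the rogue list contents and helper names are invented, while the topic names and alert types match the prototype described here.

```python
# Hypothetical sketch of the edge policy enforcement: the cloudlet inspects
# the "Alert" field of an incoming BSM-style message, drops traffic from
# rogue vehicles, and maps alert types to the MQTT topics that member
# vehicles subscribe to. The rogue list below is a made-up example.
ROGUE_VEHICLES = {"TX-FAKE-01"}

# Alert type -> MQTT topics the edge forwards the alert to.
ALERT_TOPICS = {
    "Tireslip": ["test/devices"],
    "Accident": ["test/medical", "test/police"],
}

def route_alert(sender_id, message):
    """Return the topics this message should be forwarded to. Messages from
    rogue senders and 'Null' (no-alert) messages are logged and dropped."""
    if sender_id in ROGUE_VEHICLES:
        return []                        # blocked; could also notify police
    alert = message.get("Alert", "Null")
    return ALERT_TOPICS.get(alert, [])   # "Null" or unknown alerts: drop
```
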
Once the message is received by the cloudlet and checked against the policies, the edge infrastructure forwards the following Tireslip alert message to a generic topic \\texttt{test\/devices}, to which vehicles subscribe when they become members of the edge.\n \\begin{itemize}[leftmargin=*]\n\\item[] \\texttt{\\{\"message\": \"Ice Threat - Low', }\n\\item[] \\texttt{ 'myEvent': '2019-03-19 10:56:15.921834'\"\\}}\n\\end{itemize}\nIn the case of an accident alert, the following message:\n\\begin{itemize}[leftmargin=*]\n\\item[] \\texttt{\\{\"message\":\"Accident- Require Assistance',}\n\\item[] \\texttt{ 'myEvent': '2019-03-19 11:27:40.237734'\"\\}}\n\\end{itemize}\nis sent to the topics \\texttt{test\/medical} and \\texttt{test\/police}, to which nearby medical and police vehicles are subscribed, respectively. Note that the event time has also been added to the messages, to verify that a message is not obsolete. Similarly, for updating the rogue vehicle list from the transportation authority via the central cloud to the edge infrastructures, the message\n\\begin{itemize}[leftmargin=*]\n\\item[] \\texttt{\\{\"Alert\": myAlert, \"myVehicle\": myVehicle\\}}\n\\end{itemize}\nis sent to the \\texttt{test\/Rogue-Vehicle} topic. In this message, the 'myAlert' variable can specify an \\texttt{ADD}, \\texttt{DELETE} or \\texttt{LIST} operation, and 'myVehicle' holds the vehicles to be added or deleted. In the case of a list operation, the 'myVehicle' attribute value is NULL.\nThe complete sequence of events for the administrative and operational phases in the cloudlet supported ITS is shown in Figure \\ref{fig_seq}.\n\\input{Figures\/perform}\n\\iffalse\n\\paragraph{Edge Cloudlet Allocation}\n\n\\paragraph{Dynamic Policy Update}\n\n\\paragraph{Secure and Trusted Communication}\n\\fi\n\\subsection{Performance Metrics and Discussion}\n\nWe evaluated the performance of our proposed AB-ITS model in AWS and provide metrics for the use-cases in our proof of concept. 
We first measure the execution time of the proposed policy enforcer when evaluating the attribute-based security policies (shown in Figure \\ref{fig-policy}), varying the number of vehicles associated with a cloudlet and scaling the number of messages sent per vehicle per second. In Figures \\ref{fig-perform} (a) and (b), as the number of vehicles increases (along the x axis) and more messages are sent, the enforcer takes more time to evaluate the policies, which impacts performance. The policy engine enforced in the cloudlet has a worst-case execution time of less than 200 microseconds for any number of messages sent per second (from 1 to 20) by a vehicle population ranging from 1 to 50. In the case of no alerts, this execution time is zero, as the policies are not evaluated.\nThe total trip time of our model spans from the moment a vehicle generates an alert until it is received by the target vehicles, and includes the policy evaluation time. As shown in Tables \\ref{tab-acc} and \\ref{tab-tire}, the total trip time is within the permissible limits ($\\sim$100 ms \\cite{Xu:2004:VSM:1023875.1023879}) for most of the scenarios. However, the trip time goes beyond the limits when 50 vehicles are associated with a single edge cloudlet at one time. The variation in total trip time is due to network traffic and latency, but the average and standard deviation indicate that the performance is comparable to peer-to-peer ITS. It should be noted that the extra overhead induced by policy execution (in microseconds) is negligible compared to the total trip time (in milliseconds). 
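The per-message policy-evaluation timing above could be measured along these lines. This is a sketch under stated assumptions: `evaluate_policies` is a stand-in for the actual enforcer deployed on the cloudlet, and only the measurement harness is shown.

```python
# Illustrative sketch of measuring policy-evaluation overhead per batch of
# messages; `evaluate_policies` is a placeholder for the real enforcer.
import time

def evaluate_policies(message):
    # Placeholder for the attribute-based policy checks: here we only
    # decide whether the message carries an actual alert.
    return message.get("Alert", "Null") != "Null"

def timed_evaluation(messages):
    """Return (decisions, elapsed_microseconds) for a batch of messages."""
    start = time.perf_counter()
    decisions = [evaluate_policies(m) for m in messages]
    elapsed_us = (time.perf_counter() - start) * 1e6
    return decisions, elapsed_us
```

Running such a harness while sweeping the number of vehicles and the per-vehicle message rate yields curves of the kind plotted in the performance figures.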
Our approach uses the MQTT protocol; therefore, if someone does not want to use DSRC due to the cost of the transmitter and receiver, our approach can still work with traditional IoT MQTT-based communication over LTE, 5G or WiFi connectivity.\n\\input{Tables\/perform}\nWe understand that there may be hundreds of vehicles during heavy traffic periods; therefore, to scale the system and accommodate all vehicles, we can install more cloudlets and infrastructure devices in busy areas, which reduces the number of vehicles associated with a single cloudlet at a time. This implementation in AWS showcases the practical viability and use of fine-grained policies in the context of an intelligent transportation system, without the need to capture data points from real-world traffic. It must also be noted that AWS Greengrass has a limit of 200 devices per Greengrass group, which means that no more than 200 vehicles can be associated with a single cloudlet. We can add more cloudlets to the system to cater to a larger population of vehicles and smart entities.\nAs mentioned earlier, the proposed cloudlet supported V2V and V2I communication complements the current DSRC approach and is not considered a replacement.\n\n\n\n\n\n\n\n\n\n\n\\section{Introduction and Motivation}\\label{sec_intro}}\n\\IEEEPARstart{T}{he} future smart world will be equipped with technologies and autonomous devices that collaborate among themselves with minimal human interference. The automotive industry is one of the front runners that has quickly embraced this technological change. Connected vehicles (CVs) and smart cars have been introduced, with a plethora of on-board sensors and internet-connected applications that offer safety and comfort services to users. Intelligent transportation for smart cities envisions moving entities interacting and exchanging information with other vehicles, infrastructures or on-road pedestrians. 
Federal and private agencies are defining communication standards and technologies for the Intelligent Transportation System (ITS) to ensure safety and to address the security and privacy concerns of end users.\n\nVehicle to Vehicle (V2V) and Vehicle to Infrastructure (V2I) are two proposed technological innovations that can change current transportation. V2V will enable vehicles to exchange information about speed, location, position, direction or brake status with surrounding vehicles; receiving vehicles will aggregate these messages and make smart decisions using on-board applications that warn drivers about accidents, over-speeding, slow traffic ahead, aggressive drivers, blind spots or road hazards. V2I will enable road side units (RSUs) or traffic infrastructures to transmit information about permissible bridge height, merging traffic, work zone warnings or road hazard detection to complement V2V applications. Vehicle to Pedestrian (V2P) is also envisioned to cater to pedestrians, such as those with visual or physical impairments, and send corresponding alerts to approaching vehicles. These communication technologies will use Dedicated Short Range Communications (DSRC) \\cite{dsrc} to exchange data packets, called Basic Safety Messages (BSMs) \\cite{bsm}, with nearby vehicles and entities within a 300-500 meter range. Messages will be sent up to 10 times per second, providing a 360-degree view of the proximity, with on-board applications using the information to trigger alerts and warnings. The US Department of Transportation (DOT) and the National Highway Traffic Safety Administration (NHTSA) estimate that around 80\\% of non-impaired collisions \\cite{us-dot,us-dot-1} can be prevented and 6.9 billion traffic hours saved by using V2V, V2I and V2P communications.\n\nVehicles in ITS communicate and exchange information with external entities including toll booths, gas stations, parking lots and other vehicles, which raises security and privacy issues. 
Attacks on Jeep and Tesla vehicles \\cite{jeep,tesla} have demonstrated how the car engine can be shut down and the steering wheel controlled remotely by adversaries. These smart cars are equipped with hundreds of electronic control units (ECUs) and more than 100 million lines of code, exposing a broad attack surface for critical car systems including the transmission control, air-bag, telematics, engine or infotainment systems. The in-vehicle controller area network (CAN) bus also needs security to prevent unauthorized data exchange and tampering among ECUs and software manipulation. Cyber attacks on smart connected vehicles \\cite{gao,nhtsa-1,nhtsa-2,elmaghraby2014cyber} include: unauthorized over the air (OTA) firmware updates, stealing private user data, spoofing sensors, coordinated attacks on road side infrastructure, or malware injection. The dynamic and mobile nature of V2X (Vehicle to everything) communication makes it additionally difficult to secure this distributed system, where vehicles exchange data with random unknown entities on the road. Impersonation and fake messages from malicious or compromised vehicles are also a grave concern, as the exchanged information is used by other vehicles to generate alerts and notifications. Vehicle users also have privacy concerns, since every movement can be tracked continuously by agencies, and data collected from vehicles can be used to extrapolate personally identifiable information (PII). These concerns lead to reluctance in embracing these future transportation technologies.\n\nAttribute-based access control (ABAC) \\cite{jin2012unified,hu2013guide,gupta2016mathrm} provides fine-grained authorization capabilities for resources in a system. This mechanism offers flexibility in a distributed, multi-entity, dynamic environment where the attributes of entities, along with contextual information, are used to make access and communication authorization decisions. 
An intelligent transportation system involves interaction and message exchange among entities with no prior association. Attributes of vehicles or transportation infrastructure can be used to authorize communication decisions based on their current location, ownership or degree of trust. Such security mechanisms can help prevent fake messages, stop rogue vehicles and ensure privacy-aware message communication, besides ensuring the location- and time-sensitive relevance of the exchanged information.\n\nIn this work, we present a privacy-aware, secure, attribute-based V2V and V2I communication architecture and model using trusted cloudlets. These cloudlets are set up across wide geographic locations with a defined coverage area. Each cloudlet receives messages from vehicles in its range and forwards them to all other vehicles associated with that cloudlet. Vehicles are dynamically assigned to these cloudlets as they move along geographic boundaries, based on their GPS coordinates and predicted path. An important benefit of this indirect V2V and V2I communication is the deployment of security policies at edge cloudlets, which can restrict or block fake messages and ensure trustworthiness in communication. Moreover, edge cloudlets enable message anonymization and user privacy, as the receiver cannot determine who the sender is since all messages come through the edge infrastructure. These cloudlets can also be used to forward certificate revocation lists (CRLs) to vehicles in range, besides blocking the rogue vehicles themselves. The rogue vehicle list can be dynamically updated at the edges, and messages from a vehicle on the rogue list can be blocked. The proposed architecture and attribute-based policies ensure the important security properties of message integrity and originator authenticity, and address user privacy concerns in V2V and V2I communication. 
This MQTT \\cite{mqtt} based approach for messages exchange can be used in addition to DSRC to enable use cases with acceptable latency (discussed in implementation section) without the need for additional hardware cost\\footnote{NHTSA proposed V2V equipment and communication is between \\$341 to \\$350 per vehicle in 2020 \\cite{faq}} and work with familiar technologies such as WiFi, LTE or 5G. This work proposes a formalized communication security model for V2V and V2I called attribute-based intelligent transportation system (AB-ITS). We have implemented our proposed architecture and model using AWS and collected several performance metrics, which reflect the plausibility and efficiency of our proposal.\n\n\nRest of the paper is as follows: Section \\ref{sec_related} discusses related work along with USDOT proposed Security Credential Management System (SCMS). Security and privacy requirements along with the proposed cloudlet supported ITS architecture is given in Section \\ref{sec_arch}. Section \\ref{sec_model} presents formal attribute-based V2V and V2I communication model (AB-ITS). Section \\ref{sec_imp} describes our implementation with real-world use cases using AWS, and discusses performance parameters. Section \\ref{sec_summary} concludes the paper.\n\n\\iffalse\nDepartment's National Highway Traffic Safety Administration (NHTSA) estimates that safety applications enabled V2V could eliminate or mitigate the severity of up to 80\\% of non-impaired crashes, including crashes at intersections or while changing lanes.\nOn the other hand, when such a vehicle connects with road infrastructure, the exchange of information such as information of road signals, weather condition, and nearby traffic condition takes place.\n\nIt would enable vehicles to transmit their location, speed, direction and other information 10 times per second. 
That lets cars detect, for example, when another vehicle is about to run a red light or coming around a blind turn in time to prevent a crash.\n\\fi\n\n\n\n\n\n\\section{Cloudlets Enabled Attribute-Based V2V and V2I Communication}\\label{sec_model}\nEdge cloudlet supported V2V and V2I communication has many advantages, as discussed in the previous section. These cloudlets can support attribute-based fine-grained policies based on which communication decisions are made. These attributes offer flexibility and take into account different environmental factors, along with dynamic policies based on administrator needs. Further, individual users are also allowed to set their own privacy preferences, deciding which messages they receive and from whom. In this section, we formally define our proposed cloudlet supported attribute-based intelligent transportation system model, which we refer to as AB-ITS.\n\n\\subsection{AB-ITS Communication Model}\n\\input{Figures\/model}\nThe conceptual AB-ITS communication model is shown in Figure \\ref{fig_model}, and its formal definitions are elaborated in Table \\ref{tab 1}. The model has the following components: Vehicles (V), Transportation Infrastructure Devices (I), Users (U), Sources (S), Trusted Cloudlets (TC), Target Vehicles ({$\\mathrm{V_T}$}), Operations (OP), Authorization Policies (POL), and Attributes (ATT).\n\n\\noindent\n\\textbf{Sources (S):} A source initiates operations on cloudlets (discussed below) in the system. A source can be from the set of vehicles (V), transportation infrastructures (I) or administrative users (U). For instance, in the case of V2V communication, a source is a vehicle that wants to send messages to other vehicles in its vicinity. 
Similarly, law enforcement and the city administration can initiate theft and accident alerts in a particular area via cloudlets, which are forwarded to all vehicles associated with the cloudlet.\n\n\\noindent\n\\textbf{Trusted Cloudlets (TC):} We introduce cloudlets, which are trusted edge infrastructures set up across locations to facilitate secure V2V and V2I communication. These cloudlets have a limited geographic range, and all vehicles in that range get associated with one or more TCs automatically based on their moving location coordinates. Any communication between vehicles and other entities, including transportation infrastructures (or RSUs), is done via a TC, which checks security policies to forward or block the messages sent by different sources. TCs also have attributes which are propagated to associated vehicles and can help in setting alerts and warnings based on attribute values. For instance, when a vehicle enters a forest and gets associated with the local cloudlet, it can automatically inherit a wildlife-area attribute set to ON from the TC.\n\n\\noindent\n\\textbf{Target Vehicles ($\\mathbf{V_T}$):} These vehicles are a subset of the total vehicles (V) in the transportation system and are the potential receivers of messages sent by a source. Both the target vehicles and the source must be associated with the same TC to enable V2V and V2I communication.\n\n\\noindent\n\\textbf{Operations (OP):} Operations are actions performed by a source on a TC. TCs also execute operations against associated vehicles and infrastructures. Examples include a source initiating a join operation to get associated with a TC, or trying to send a message to vehicles via a TC. A TC forwarding a message sent by sources to its member vehicles is another example of an operation in ITS. 
These also include administrative actions performed by a user, such as updating, deleting or adding attribute values for an attribute or the rogue vehicles list in a TC.\n\n\\noindent\n\\textbf{Authorization Policies (POL) and Attributes (ATT):} Sources, TCs, vehicles and other relevant ITS entities can have individually defined policies along with the system-wide authorization policies needed for the overall secure functioning of the ecosystem. Vehicle owners can set individual privacy preferences which enable them to allow or disallow any particular piece of private information from being shared with a third party remotely. Similarly, the city traffic department may set its own rules for when to trigger an alert or warning to vehicles in a sensitive or accident-prone area. Administrative policies are also needed to authorize a legitimate user to change attributes, send notifications to TCs or update the rogue vehicles list. Entities like vehicles and sources also have individual characteristics, called attributes, which are used to make authorization and communication decisions in ITS. For a vehicle, sample attributes can be: vehicle ID, speed, heading angle, brake status, vehicle size, vehicle type or preferred notifications. Vehicles and infrastructures can also inherit attributes from their associated TCs, which can have common location-wide attributes like speed limit, road work ahead or blind turn.\n\nBoth attributes and policies are dynamic and can be changed by administrators or vehicle owners based on system needs and personal preferences. Attributes of vehicles like location, speed or heading angle change continuously, whereas other attributes like vehicle size remain static. Policies are also dynamic in nature, as reflected in the use-case implementation in the next section, where we define a security policy with a list of black-listed rogue vehicles which are reported to law enforcement when detected by TCs. 
This list is dynamic in nature and is continuously updated by administrators, demonstrating how dynamic policies are used and enforced in ITS.\nIt must be noted that within a session the proposed model assumes a static set of policies and attributes which are used to make V2V and V2I communication decisions. All relevant policies, including system-defined ones and user preferences, are evaluated to make the final communication decision.\n\\input{Tables\/table_model-new}\n\nIn our proposed model, TCs evaluate security policies and ensure that un-trusted or fake messages are not forwarded to the vehicles associated within their geographic coverage boundary. Connected vehicles initiate association with TCs pro-actively based on their predicted path, and once they get into the range of a TC, they become its members. Such communication with TCs can be done using encrypted and secure cellular or WiFi technologies with no added equipment cost. It should be noted that our model complements the proposed DSRC based direct V2V and V2I communication, and can be used to assist in situations where the authenticity and integrity of messages is much needed. The use-cases in the next section highlight the real-world enforcement and use of the AB-ITS model.\n\n\n\\subsection{Formal Definitions}\n\nTable \\ref{tab 1} elaborates the formal AB-ITS communication model definitions, which comprise vehicles (V), transportation infrastructure devices (I), administrative users (U) and edge cloudlets (TC). A source in S initiating an operation op $\\in$ OP can be from the set of vehicles, transportation infrastructures or users, whereas the target vehicles {$\\mathrm{V_T}$}~are a subset of the total vehicles in the entire transportation system ({$\\mathrm{V_T}$}~$\\subseteq$ V). Attributes are functions defined for sources and edge cloudlets, where functions can be set or atomic valued (stated by attType) and are assigned values from Range(att) for each att $\\in$ ATT. 
The atomic valued attributes are assigned single value including null (denoted as $\\bot$) whereas set valued attribute can have a subset of values assigned from power set of the range of attribute function. Some attributes are also defined system wide, which reflect the state of entire transportation system (like level of threat or city traffic) and are set by administrators. Authorization security policies are defined for individual sources and TCs, which are either stated based on personal privacy preferences or are enforced system wide as defined by central administrators. For example, a driver may not want to receive marketing commercials on dashboard, so she can set such personal preference as choosing the desired policy, whereas police can define a policy with a list of black-listed cars and blocking communication from them.\n\nSource and target vehicles are dynamically assigned to one or many trusted edge cloudlets based on their current GPS coordinates and predicted path as defined by associated\\_cloudlets function. The association with edge cloudlets is fixed for transportation infrastructures or administrators which are assigned at the time of system deployment whereas for vehicles it keeps on changing as the vehicles move. Each cloudlet has defined geographic coverage area and when vehicles enter the area, they get associated with the cloudlet. A vehicle may be associated with multiple cloudlets in areas where coverage areas are overlapping, thereby, a vehicle is always associated to at least one cloudlet at all times. These cloudlets mediate the V2V and V2I communication by enforcing security policies, stop fake messages and ensure privacy, as discussed later in the model definitions. Further, sources (including vehicles) inherit attributes from their associated cloudlets, which helps in administration and propagation of common attributes to all associated entities with single administrative action. 
For instance, at a location where a flash flood warning is issued, the edge cloudlet installed there will set the attribute flash-flood = ON for all its associated vehicles when they become members of that cloudlet. In the case of a set-valued attribute function, the effective attribute values for att $\\in$ ATT of a source (defined as $\\mathrm{\\ensuremath{\\mathrm{effS}}_{att}}$), including target vehicles, are the union of the direct values assigned to the source for attribute att and the values assigned to att for each associated cloudlet. However, in the case of an atomic-valued attribute, it is necessary to define which attribute value takes precedence when multiple edge cloudlets are associated. In our model, we propose that the value of the most recently connected cloudlet with a non-null value for the attribute is inherited by the associated source or vehicles.\\footnote{There are other approaches to deal with atomic value inheritance, but for moving vehicles which are dynamically assigned to new cloudlets, we believe this approach is the most appropriate and relevant.} For example, the speed-limit attribute of the most recently associated cloudlet will be populated for all member vehicles, and as the vehicle moves, this value is inherited from the next associated edge cloudlet, and so on. This inheritance for atomic-valued attributes only takes place when edge cloudlets have non-null values; when all associated cloudlets have null values, the direct attribute value of the source holds as its effective value.\n\nAuthorization functions are parameterized propositional logic formulae defined to represent access control security policies stated in the policy language defined in Table \\ref{tab 1}. The function $\\mathrm{Auth_{op}}$(s:S, tc:TC) specifies the conditions under which a source s (including vehicles) can perform an operation op $\\in$ OP via cloudlet tc $\\in$ TC. 
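The effective-attribute inheritance rules above (union for set-valued attributes, most-recent non-null value for atomic-valued attributes) can be sketched as follows; the attribute names and dictionary representation are illustrative, not part of the formal model.

```python
# Sketch of effective-attribute computation. A source's direct attributes
# and each cloudlet's attributes are modeled as plain dicts; None plays the
# role of the null value. Attribute names below are invented examples.
def eff_set(direct, cloudlets, att):
    """Set-valued: union of the direct values and every associated
    cloudlet's values for attribute `att`."""
    values = set(direct.get(att, set()))
    for tc in cloudlets:
        values |= tc.get(att, set())
    return values

def eff_atomic(direct, cloudlets, att):
    """Atomic-valued: `cloudlets` is ordered oldest-to-newest association;
    the most recently joined cloudlet with a non-null value wins, falling
    back to the source's own direct value when all are null."""
    for tc in reversed(cloudlets):
        if tc.get(att) is not None:
            return tc[att]
    return direct.get(att)
```
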
These boolean authorization functions are evaluated by substituting actual arguments for the formal parameters, along with the direct and effective attribute values of the actual arguments. Similar syntax and policy language can be defined for other sets of policies, including personal vehicle-specific policies or system-wide policies, with the attributes of the relevant entities substituted during authorization request evaluation.\nThe authorization decision to allow {$\\mathrm{s^\\prime}$}~$\\in$ S to perform an operation op $\\in$ OP on {$\\mathrm{tc^\\prime}$}~$\\in$ TC is made when the authorization function, evaluated with the actual arguments ({$\\mathrm{s^\\prime}$}~$\\in$ S, {$\\mathrm{tc^\\prime}$}~$\\in$ TC), returns True.\nSimilarly, the decision for an operation op from {$\\mathrm{tc^\\prime}$}~$\\in$ TC to {$\\mathrm{v^\\prime}$} $\\in$ {$\\mathrm{V_T}$}~is made by calling the relevant authorization function with actual parameters.\n\n\nAs discussed in the authorization property, the model defines two primitive operations, `send' and `forward', relevant for V2V and V2I communication. A source uses the `send' operation (defined as $\\mathrm{Auth_{send}}$({$\\mathrm{s^\\prime}$} : S, {$\\mathrm{tc^\\prime}$} : TC)) to communicate a `send message' to a trusted cloudlet, whereas the `forward' operation (defined as $\\mathrm{Auth_{forward}}$({$\\mathrm{tc^\\prime}$} : TC, {$\\mathrm{v^\\prime}$} : {$\\mathrm{V_T}$})) is between a trusted cloudlet and a target vehicle, defining a `forward message'. 
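The two-step send/forward authorization just defined can be sketched as below. The concrete attribute checks inside the two functions are invented placeholders for the propositional-logic policy formulae; only the composition (a common cloudlet for which both checks hold) follows the model.

```python
# Sketch of the send/forward authorization composition: a message flows
# from source s to target vehicle v only if some common cloudlet tc
# satisfies both Auth_send(s, tc) and Auth_forward(tc, v). The attribute
# checks below are illustrative stand-ins for the real policy formulae.
def auth_send(source, tc):
    """E.g., the source must be associated with tc and not be rogue."""
    return tc in source["cloudlets"] and not source.get("rogue", False)

def auth_forward(tc, vehicle):
    """E.g., the target vehicle must be associated with tc."""
    return tc in vehicle["cloudlets"]

def allow_message(source, vehicle):
    """True iff both authorization steps hold for some common cloudlet."""
    common = set(source["cloudlets"]) & set(vehicle["cloudlets"])
    return any(auth_send(source, tc) and auth_forward(tc, vehicle)
               for tc in common)
```
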
Allowing communication from {$\\mathrm{s^\\prime}$} to {$\\mathrm{v^\\prime}$} requires a common {$\\mathrm{tc^\\prime}$}~with which both {$\\mathrm{s^\\prime}$} and {$\\mathrm{v^\\prime}$} are associated, and requires that the authorization functions for send and forward messages, i.e., $\\mathrm{Auth_{send}}$({$\\mathrm{s^\\prime}$} : S, {$\\mathrm{tc^\\prime}$} : TC) and $\\mathrm{Auth_{forward}}$({$\\mathrm{tc^\\prime}$} : TC, {$\\mathrm{v^\\prime}$} : {$\\mathrm{V_T}$}), as well as the system-defined security policies, all evaluate to True. Additional relevant operations and messages can be defined similarly.\n\nThe proposed AB-ITS model leverages the attributes and GPS coordinates of communicating entities to enable and secure V2V and V2I communication. The introduction of trusted cloudlets provides the benefit of enforcing security policies at the edge to stop fake messages and to enhance user privacy and message integrity before messages are forwarded to other target vehicles. These edge cloudlets ensure the low-latency, near real-time communication much needed in most ITS applications, without bandwidth issues. It must be noted that the messages shared among sources and vehicles are end-to-end encrypted and can still use the proposed DSRC wireless technology for communication with the cloudlet and then to the vehicles. Our model complements the USDOT proposed V2V and V2I architecture functionalities, supports applications which need additional message integrity and confidentiality, and can be used as an add-on to the current ITS peer-to-peer communication.\n\n\\iffalse\nAuthorization functions are parameterized propositional logic formulae defined to represent access control security policies stated in the policy language as shown in Table \\ref{tab 1}. 
We have stated syntax for two authorization functions $\\mathrm{Auth_{op}}$(s:S, tc:TC) and $\\mathrm{Auth_{op}}$(tc : TC, v : {$\\mathrm{V_T}$}), which specify conditions under which source s can perform operation op $\\in$ OP on tc, and policies when tc can perform operation op on vehicle v respectively. These boolean authorization functions take formal parameters which are passed when function call is made using the actual arguments. The values of direct and effective attributes of these formal parameters (s $\\in$ S, tc $\\in$ TC, v $\\in$ {$\\mathrm{V_T}$}) are used to evaluate the authorization policy. Similar syntax and policy language can be defined for other set of policies including personal vehicle specific policies or system wide policies with attributes of relevant entities included in authorization requests evaluation. Authorization decision for allowing {$\\mathrm{s^\\prime}$}~$\\in$ S to send or perform an operation op $\\in$ OP on {$\\mathrm{tc^\\prime}$}~$\\in$ TC is determined when the authorization function call is made using the actual parameters ({$\\mathrm{s^\\prime}$}~$\\in$ S, {$\\mathrm{tc^\\prime}$}~$\\in$ TC) and if function returns True, the operation op is allowed. Similarly, the decision for operation op from {$\\mathrm{tc^\\prime}$}~$\\in$ TC to {$\\mathrm{v^\\prime}$} $\\in$ {$\\mathrm{V_T}$}~is made by calling the relevant authorization function with actual parameters. The communication decision for a message from a source {$\\mathrm{s^\\prime}$}~to vehicle {$\\mathrm{v^\\prime}$}~requires a common trusted cloudlet {$\\mathrm{tc^\\prime}$}~to which both {$\\mathrm{s^\\prime}$}~and {$\\mathrm{v^\\prime}$}~are associated, and the required authorization policies including system defined policies are taken into account to make the final ITS V2V and V2I communication decision. 
The following authorization property must hold true in AB-ITS model:\n\n\nFor allowing, communication and message flow from {$\\mathrm{s^\\prime}$} to {$\\mathrm{v^\\prime}$} requires a common {$\\mathrm{tc^\\prime}$} to which both {$\\mathrm{s^\\prime}$} and {$\\mathrm{v^\\prime}$} are associated and the required atomic authorization functions i.e $\\mathrm{Auth_{send}}$({$\\mathrm{s^\\prime}$} : S, {$\\mathrm{tc^\\prime}$} : TC) and $\\mathrm{Auth_{forward}}$({$\\mathrm{tc^\\prime}$} : TC, {$\\mathrm{v^\\prime}$} : {$\\mathrm{V_T}$}) as well as the system defined security policies must return True.\n\n\nThe proposed AB-ITS model leverages attributes and GPS coordinates of communicating entities to enable and secure V2V and V2I communication. The introduction of trusted cloudlets provide benefits of enforcing security policies at the edge to stop fake messages, enhance user privacy and integrity of messages before forwarded to other target vehicles. These edge cloudlets ensure low latency and near real time communication much needed in most ITS applications without bandwidth issues. It must be noted that the messages shared among source and vehicles are end to end encrypted and can still use the proposed DSRC wireless technology for communication with cloudlet and then to the vehicles. Our model complements the USDOT proposed V2V and V2I architecture functionalities and support applications which need additional message integrity and confidentiality, and can be used as an add on to current ITS peer to peer communication.\n\n\\fi\n\n\n\n\\section{Proposed Cloudlets Supported ITS Architecture}\\label{sec_arch}\nThe current peer to peer V2V and V2I communication as represented in Figure \\ref{fig_v2v} is proposed to use SCMS to ensure secure trusted basic safety messages exchange among entities. However, the vast and complex scale of this PKI based system has user privacy and security concerns which need to be addressed before its deployment. 
In this section, we discuss the security and privacy requirements of the ITS and smart cars ecosystem and highlight how the proposed trusted cloudlets supported communication offers the required security and complements current solutions.\n\n\\subsection{Security and Privacy Requirements}\n\\input{Figures\/architecture}\nAn Intelligent Transportation System (ITS) involves real-time sharing of location and sensitive information about vehicles and passengers, which poses a serious privacy threat and a strong deterrent to its adoption. A dynamic and distributed ITS will enable interaction with random entities on the road with no prior trust established, and the information sent from these smart vehicles will be used by on-board applications to provide safety and warning signals, which itself carries inherent security risks. An adversary can compromise a road-side unit or a vehicle to send fake information about traffic or an accident, which can trigger unnecessary alerts and may distract drivers. Basic safety messages (BSMs) are designed to contain no personally identifiable information (PII) and are attached with a certificate issued by a certificate authority in SCMS. However, the limited number of certificates and the number of messages sent per minute can reveal the identity of a targeted vehicle using advanced computing techniques. Untraceability of vehicles and users is paramount to ensure privacy in ITS. Also, the system must not save personal or individual information and use it for law enforcement or issuing speeding tickets. The anonymity of the sender must always be maintained. Over-the-air messages exchanged among smart entities must have integrity and authenticity. Security mechanisms to protect smart cars and their critical systems from unauthorized access, control and tampering are important to strengthen intelligent transportation. An integrated approach combining DSRC and cellular technologies is needed for the different ITS applications. 
Cloud and cloudlet supported architectures will provide resiliency and reduce system stress. An encrypted and secure data transfer link is the backbone needed from DSRC, cellular LTE or any other communication technology involved in ITS. However, the limited bandwidth and latency issues of the cloud connectivity needed for certificate updates and revocation require attention.\n\nIn a smart city, location based notifications for connected vehicles must allow users to set personal preferences: a user may want weather warnings but not parking advertisements on board. Dynamic policies are also required; for example, in case of a traffic jam in an area, a policy may ask all drivers to follow route A but, considering the heavy traffic on route A, the policy may be changed to move traffic to route B or C. This can be implemented at the edge level and triggered by central administrators. In such a case, whether the administrative subject is authorized to change the policy or trigger an alert also needs security checks.\n\n\n\n\\subsection{How Can Cloudlets Provide Security?}\nFigure \\ref{fig_arch} shows the proposed edge supported architecture for V2V and V2I communication. Trusted edge infrastructures (set up by the city administration) will work as middlemen and relay messages to vehicles and other entities inside their geographic range. Instead of peer-to-peer connections, all vehicles publish to edges, where the defined security policies are checked to ensure the validity and integrity of the communication, and the relevance of the messages, before forwarding to other vehicles. A vehicle can be in range of multiple infrastructures, depending on its location. Each vehicle will be dynamically associated with edges as it moves. All participating vehicles and RSUs still need to enroll with a central authority to be part of the system, to ensure that only trusted vehicles are allowed to exchange messages among themselves. The communication technologies used between vehicles and cloudlets can be cellular LTE, WiFi or DSRC. 
The MQTT messaging protocol can be used, as discussed in the implementation, which will obviate the cost of the DSRC equipment needed in smart cars. The proposed architecture is implemented in addition to direct V2V and V2I communication and is supported by the NPRM \\cite{nhtsa-6} documents, which recommend both DSRC and secondary communication for ITS.\n\nTrusted cloudlets installed over a wide geographic area offer the fog infrastructure functionality required in an IoT environment. They can address security concerns by deploying and enforcing security policies to ensure trusted communication among smart entities on the road.\nThis proposed architecture offers an alternative edge supported V2V and V2I communication with minimal message latency, within permissible time limits \\cite{Xu:2004:VSM:1023875.1023879,articleV2V}.\nA vehicle sending and receiving BSMs or other messages must be associated with an edge infrastructure, which will enforce policies, sanitize messages, prevent the dissemination of fake messages and offer administrative advantages. Each cloudlet will have a geographic range, and all the vehicles within it will get associated with the edge automatically. Since the range of an edge is restricted to a limited area, it also ensures the location sensitivity of the messages exchanged, as vehicles communicating messages must be associated with a common edge cloudlet. Message anonymization and sanitization can be performed, since\nthe messages sent by a vehicle are relayed via the edge cloudlet without direct peer-to-peer communication, which has fewer security and privacy implications.\n\nFurther, using cloudlets offers administrative benefits, as a single notification from the edge infrastructure will trigger alerts for all the vehicles connected to it in a geographic range. 
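The dynamic range-based association described above can be sketched as follows (our illustration, not the paper's implementation). For simplicity we use planar coordinates in kilometers and Euclidean distance; a real deployment would use GPS coordinates and great-circle distance.

```python
import math

# Illustrative sketch: a vehicle is dynamically associated with every
# cloudlet whose geographic range covers its current position.
# Cloudlet records and the (x, y) km coordinate system are assumptions.

def associated_cloudlets(vehicle_pos, cloudlets):
    """Return the ids of all cloudlets whose range contains the vehicle."""
    x, y = vehicle_pos
    result = []
    for c in cloudlets:
        if math.hypot(x - c["x"], y - c["y"]) <= c["range_km"]:
            result.append(c["id"])
    return result
```

Re-running this check as the vehicle moves realizes the "dynamically associated with edges as it moves" behavior; a vehicle inside overlapping ranges is associated with several cloudlets at once.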
If an agency or a police vehicle wants to send alerts, instead of sending them to each individual vehicle, it can send them to a trusted cloudlet, which, after checking the policies to ensure the sender is allowed to generate such requests, forwards or stops the message.\nAlso, entities present in a particular area have certain characteristics in common (for example, stop sign warnings, speed limits, deer threats, flash flood warnings, etc.), which can be inherited by getting dynamically associated with edge infrastructures, without the need to generate messages 10 times per second \\cite{nhtsa-3} to get this information from other vehicles or RSUs, thus saving network bandwidth.\n\nIt is also possible to limit messages to a specific set of vehicles: for example, in case of a kidnapped child warning, messages can be sent to nearby edge infrastructures and then only to police vehicles in the area, and not to the common public, using security policies defined at the cloudlet. The edge infrastructure can also filter unwanted and incorrect messages from vehicles and infrastructure using a majority rule policy. For example, if an adversary is sending an accident message (either deliberately or because of a malfunctioning sensor on the vehicle) to subvert the traffic, whereas other vehicles report no accident and clear traffic, the installed trusted edge will have the intelligence and policy to filter such fake messages and forward the correct information to its associated vehicles.\nThis will not be possible immediately in a peer-to-peer V2X (vehicle to anything) architecture, until certificate revocations (by a central authority) are propagated to the individual vehicles, which may take time and also requires internet connectivity, which cannot be guaranteed at all times in the terrain where the vehicle is moving. 
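The majority rule policy just described can be sketched as a simple quorum vote (our illustration; the paper does not fix a concrete algorithm or threshold):

```python
from collections import Counter

# Sketch of the majority-rule filtering policy: the cloudlet forwards a
# reported event only if more than a quorum fraction of the distinct
# vehicle reports agree on it. The 0.5 default quorum is an assumption.

def majority_filter(reports, quorum=0.5):
    """reports: list of event strings, one per reporting vehicle.
    Returns the majority event if it exceeds the quorum, else None."""
    if not reports:
        return None
    event, votes = Counter(reports).most_common(1)[0]
    return event if votes / len(reports) > quorum else None
```

Under this policy, a single adversarial "accident" report is outvoted by several "clear" reports and is never forwarded, matching the scenario in the text.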
Also, instead of sending CRLs to each vehicle, only the edge servers can be sent the list of revoked certificates and, based on this information, the edge can decide whether the messages sent by a vehicle should be forwarded or not.\n\nFurther, if an adversary is detected by an edge through fake or wrong messages, policies can be defined to inform the appropriate agencies and law enforcement in the area where such malicious behaviour is detected. It is also possible to have different levels of alerts based on the degree of trust and on who the sender is. A bomb threat initiated by law enforcement in the vicinity will be treated as a major threat, and the edge infrastructure will state it as a code red alert, with immediate rerouting and emergency exit directions.\n\n\n\n\n\n\n\n\\section{Related Work}\\label{sec_related}\nConnected and smart vehicle applications need wireless exchange of V2X messages among unknown moving vehicles, RSUs and pedestrians. The intelligent transportation system (ITS) proposed for future cities relies on underlying technologies and raises security concerns, for which solutions have been proposed; we briefly review these in this section.\n\n\\subsection{Security Credentials Management System}\nThe United States Department of Transportation (USDOT) has suggested a PKI-based security infrastructure, called the Security Credentials Management System or SCMS \\cite{its-scms2, its-scms1}, to ensure trusted V2V and V2I communication among random moving entities. Authorized participating vehicles use digital certificates issued by SCMS to validate and authenticate basic safety messages (BSMs), attaching these certificates to each message to ensure the integrity, confidentiality and privacy of the communication. 
Vehicles need an initial enrollment into SCMS to obtain security certificates from trusted certificate authorities (CAs). Each BSM will include vehicle related information digitally signed using the private key corresponding to the digital certificate attached to the BSM. Different certificate types are used, including enrollment, pseudonym and identification certificates for vehicles and enrollment certificates for RSU applications. Certificates can be revoked by CAs for potential adversaries or reported misbehaving vehicles, by disseminating certificate revocation lists (CRLs). USDOT and NHTSA claim \\cite{us-dot-1} that BSMs will exchange anonymized information and that no personally identifiable data will be shared with other entities. SCMS is envisioned as a central system trusted by all participating entities to revolutionize transportation.\n\nHowever, there are some challenges \\cite{scms-issues,scms-issues1} that need to be addressed before the system is deployed. Each vehicle will receive 20 certificates weekly to sign its BSMs \\cite{nhtsa-5}, rotating every 5 minutes. Therefore, a vehicle will cycle through its set of 20 certificates every 100 minutes. In such a scenario, a computer can analyse all the certificates a vehicle used in a day and then use these certificates to track it for a week. Although the PKI based SCMS system establishes who signed a certificate, it is difficult to prove how correct or true the information sent from the vehicle is. A malfunctioning device in the vehicle can result in false BSMs being exchanged even though the sender is trusted. Further, the proposed SCMS system will be the largest and most complex PKI system ever built, producing 265B to 800B certs\/year depending on the weekly rate, while supporting 17M vehicles\/year \\cite{scms-issues1}. 
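The arithmetic behind the tracking concern is worth making explicit (our own back-of-the-envelope check, using only the figures quoted above):

```python
# 20 certificates rotating every 5 minutes: the full set is cycled
# through every 100 minutes, so one day of passive observation sees
# the vehicle's entire weekly certificate set many times over.

CERTS_PER_SET = 20
ROTATION_MIN = 5

set_cycle_min = CERTS_PER_SET * ROTATION_MIN   # minutes per full cycle
cycles_per_day = 24 * 60 // set_cycle_min      # full cycles observable per day
```

With a 100-minute cycle, an observer logging certificates for a single day sees each of the week's 20 certificates over a dozen times, which is what makes week-long linkability feasible.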
The revocation of certificates for bad actors would result in pushing CRLs to all enrolled vehicles, which will be time and bandwidth consuming.\n\n\\subsection{Relevant Background and Technologies}\n\nSeveral general IoT architectures \\cite{atzori2010internet,gubbi2013internet,alshehri2016access} have been proposed, with different middleware layers in a multi-layer stack representing physical objects, a communication or service layer, the cloud and end-user applications. Gupta and Sandhu proposed \\cite{gupta2018authorization} an enhanced access control oriented architecture (E-ACO) particularly relevant to smart cars and intelligent transportation. The work introduced clustered objects (smart objects with multiple sensors, like cars) as a component of the object layer, which interact with other objects similarly to V2V and V2I communication. As shown in Figure \\ref{fig_eaco}, the E-ACO architecture has four layers: the \\textbf{Object Layer} at the bottom represents physical objects including connected cars, vehicles and RSUs. The \\textbf{Virtual Object Layer} maintains a cyber entity (like an AWS shadow stored as JSON) for each physical object, which is imperative in a moving and dynamic ecosystem like smart cars, where the connectivity of a vehicle is not continuously guaranteed. With virtual objects, when direct communication with a physical object is not possible, its virtual entity maintains the last reported and desired state information. Further, it resolves the issue of heterogeneity, as objects support different communication technologies. Using virtual objects, physical entities communicate with their corresponding virtual objects; messages are exchanged with the virtual entities of other objects and then passed on to the actual physical objects. 
The \\textbf{Cloud Services and Application Layers} together harness the data sent by physical objects and use it to extract value, perform analytics and provide end-user cloud supported applications.\n\n\\input{Figures\/v2v}\n\nSmart car security incidents, including the Jeep \\cite{jeep} and Tesla Model X \\cite{tesla} hacks, have demonstrated how the engine can be stopped\nand the steering remotely controlled, exhibiting cyber threats. Security and privacy issues in smart cars and ITS are serious concerns, and several federal agencies are working along with industry partners to ``fully'' proof the system before its final deployment and use by the common public. The European Union Agency for Network and Information Security (ENISA) \\cite{enisa} has studied vulnerable assets in smart cars with the related threats and risks, and proposed some prevention approaches with recommendations. Cooperative Intelligent Transport Systems (C-ITS) \\cite{c-its1,c-its2} also highlighted the need for data communication integrity and authenticity in V2V and V2I, and proposed a PKI based trust model using pseudonym certificates. An NHTSA report \\cite{nhtsa-3} has thoroughly explored the technical, legal and policy related issues pertinent to V2V communication and studied technological solutions for safety and privacy issues. The US Government Accountability Office (GAO) \\cite{gao} has also discussed security risks and potential attack surfaces in smart vehicles, and proposed solutions to prevent cyber threats.\n\nAttribute based access control \\cite{hu2015attribute,jin2012unified,gupta2018attribute} provides fine grained authorization capabilities most appropriate in dynamic and distributed systems such as ITS. Recently, a dynamic groups and ABAC model \\cite{Gupta:2019:DGA:3292006.3300048,gupta2019secure} was proposed for the smart cars ecosystem, which caters to the mobile needs of vehicles. However, the model is more suitable for cloud assisted applications, and a real-time edge supported V2V and V2I model is still missing. 
Role based access control models \\cite{sandhu1996role,ferraiolo2001proposed} were designed particularly for enterprise applications with a limited set of roles and administrators assigning roles to users. Such a concept does not fit the dynamic and random unknown IoT smart cars setting, where devices and vehicles are in different administrative domains spread across a geographic area.\n\n\n\n\n\n\\section{Summary}\\label{sec_summary}\n\nThis research work proposes cloudlet assisted secure V2V and V2I communication in an intelligent transportation system, which ensures trusted and reliable message exchange among moving entities on the road. We introduce the novel notion of dynamic edge associations, in which smart entities get connected to different pre-installed cloudlets on the road, which relay the basic safety messages, perform the needed filtering and reduce the privacy concerns of users. These cloudlets can anonymize the messages, ensure their trustworthiness and ensure their relevance to the entities which receive them. We also present a formal model which specifies attribute based policies for V2V and V2I communication. Several use-cases of ITS have been discussed, along with an implementation in Amazon Web Services (AWS). Performance has been evaluated in terms of the time taken to evaluate the policies in the cloudlets and the total trip time from the moment a message is generated until it is received and relayed by the cloudlets. In future work, we will incorporate additional privacy preserving approaches wherein the exact GPS location coordinates are not required to be shared with the cloud. 
The work can be complemented using homomorphic encryption or other similar approaches, which will further mitigate the privacy concerns of the users.\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}}
\\cite{Deffayet:2009mn,Deffayet:2011gz}.\n\nThe fact that the field equations are no more than second order,\nin spite of the higher-order action, guarantees the absence of\nany Ostrogradsky ghost, i.e., an extra degree of freedom with\nnegative kinetic energy. Another interesting feature of the\nGalileon model is that it generically possesses a Vainshtein\nscreening mechanism. This last property can be understood as a\nparticular case of ``k-mouflage'', i.e., of kinetic\nscreening~\\cite{Babichev:2009ee}. The Vainshtein mechanism hides\nthe propagating scalar degree of freedom, thus providing a way to\npass local gravity tests, even if the scalar field is directly\ncoupled to matter. The local effects of the Vainshtein screening\nof the Galileon have been studied in a number of\nworks~\\cite{galileons}.\n\nUsually, when studying local effects of the Galileon, the\ncosmological evolution of the scalar field is not taken into\naccount, because of the smallness of the time derivative of the\nfield compared to the spatial gradients. However, such a\ndisregard of the Galileon's time evolution may be misleading. For\nexample, it was shown in Ref.~\\cite{Babichev:2011iz} that in\nspite of the Vainshtein mechanism, which operates locally, this\ntime evolution still poses severe constraints on many interesting\nscalar-tensor theories possessing a shift symmetry and a\nmatter-scalar coupling.\n\nIn this work, we study the behavior of the covariant cubic\nGalileon model (the covariant version of the decoupling limit of\nthe DGP model) in presence of a spherically symmetric matter\nsource, and possible time evolution of the scalar field imposed\nby cosmology. We investigate in detail two distinct cases: (i)~a\nspherically symmetric matter source is embedded in asymptotically\nMinkowski spacetime, and the Galileon field does not depend on\ntime; (ii)~an asymptotically de Sitter Universe with a\ntime-dependent Galileon field. 
To realize the latter regime, we adopt the\ncosmological solution of the Kinetic Gravity Braiding\nmodel~\\cite{Deffayet:2010qz}, for which the acceleration of the\nUniverse is driven by the time evolving Galileon field itself. We\nfind non-homogeneous solutions in the test scalar field\napproximation, and analyze the stability of perturbations on top\nof these solutions.\n\nThe paper is organized as follows. In Sec.~\\ref{Sec2}, we\nintroduce the action and field equations of the cubic covariant\nGalileon. In Sec.~\\ref{Sec3}, we derive a solution for a scalar\nfield having a cosmological time evolution, in an arbitrary\nspherically symmetric static spacetime, in the test field\napproximation. Perturbations are investigated in Sec.~\\ref{Sec4},\nwhere we diagonalize the kinetic matrix, demixing the spin-0 and\nspin-2 propagating modes. The solution in asymptotically\nMinkowski spacetime, in presence of a matter source with no time\nevolution of the scalar field, is studied in Sec.~\\ref{Sec5},\nwhere we find the effective metric for the helicity-0 propagating\nmode, and the conditions for stability of the solution. In\nSec.~\\ref{Sec6}, we perform a similar analysis for asymptotically\nde Sitter spacetime, in presence of a cosmological evolution of\nthe scalar field. Scalar perturbations inside a matter source are\ninvestigated in Sec.~\\ref{Sec7}. Besides the analyses of the\nCauchy problem and the positivity of energy, we also exhibit a\nlarge friction term, which is caused by the\ncosmologically-imposed time evolution of the Galileon within\nmatter. 
Our conclusions are presented in Sec.~\\ref{Concl}.\n\n\\section{Action and field equation}\n\\label{Sec2}\nWe consider the quadratic plus cubic Galileon action in curved\nspacetime,\n\\begin{equation}\nS = 2 M_P^2 \\int{d^4 x\\, \\sqrt{-g}\\left\\{\\frac{R}{4}\n- \\frac{k_2}{2}(\\partial_\\mu\\varphi)^2\n-\\frac{k_3}{2M^2} \\Box\\varphi (\\partial_\\mu\\varphi)^2\\right\\}}\n+S_\\text{matter}\\left[\\psi_\\text{matter};\n\\tilde g_{\\mu\\nu}\\right],\n\\label{Eq1}\n\\end{equation}\nwhere the scalar field $\\varphi$ is chosen to be dimensionless,\nwhere the reduced Planck mass\\footnote{Throughout this paper, we\nchoose natural units such that $\\hbar = c = 1$.} $M_P \\equiv\n(8\\pi G)^{-1\/2}$ should not be confused with the mass scale $M$\nentering the Galileon's cubic kinetic term, and where $g$ and $R$\ndenote respectively the determinant and the scalar curvature of\nthe metric $g_{\\mu\\nu}$ (of signature $-+++$) used to define\ncovariant derivatives and to contract indices. All matter fields,\nglobally denoted as $\\psi_\\text{matter}$, are assumed to be\nminimally coupled to a physical (``Jordan'') metric $\\tilde\ng_{\\mu\\nu} \\equiv e^{2\\alpha\\varphi}g_{\\mu\\nu}$, where $\\alpha$\ndenotes a dimensionless matter-scalar coupling constant. The\nconstants $k_2$ and $k_3$, dimensionless too, could \\textit{a\npriori} be reabsorbed in the definitions of the scalar field\n$\\varphi$ and of the mass scale $M$ (while changing\nsimultaneously the value of $\\alpha$:\n$\\varphi^\\text{new}= |k_2|^{1\/2} \\varphi$,\n$M^\\text{new}= M |k_2|^{3\/4}\/|k_3|^{1\/2}$,\n$\\alpha^\\text{new}= \\alpha\/|k_2|^{1\/2}$). In other words, the\nmodel (\\ref{Eq1}) actually depends on only two independent\nparameters, for instance $M$ and $\\alpha$. However, the redundant\nfactors $k_2$ and $k_3$ will be useful in the following to keep\ntrack of the origin of the different terms, and above all to\nchange easily the signs of these two kinetic terms. 
Their\nnumerical factors are chosen to simplify the following results\ninvolving $k_3$, and so that $k_2 = 1$, $k_3 = 0$ defines a\ncanonically normalized positive-energy spin-0 degree of freedom.\nAnother interest of these coefficients is that any normalization\nconvention can easily be recovered, for instance $k_2 =\n\\frac{1}{2}$ corresponding to the relative weight of the\nEinstein-Hilbert action and the scalar-field standard\nkinetic term in many cosmological papers.\n\nWhen $k_3 = 0$, i.e., in standard scalar-tensor theories of\ngravity \\cite{Jordan,Willbook,Damour:1992we},\n$g_{\\mu\\nu}$ is called the ``Einstein metric'' and its\nfluctuations describe a spin-2 degree of freedom. As we will see\nin Sec.~\\ref{Sec4}, this is no longer the case when $k_3\\neq 0$\nbecause of the metric-scalar derivative coupling entering the\nGalileon's cubic kinetic term. The metric $g_{\\mu\\nu}$ is thus\nneither the Jordan ($\\tilde g_{\\mu\\nu}$) nor the Einstein one,\nand it should just be considered as the variable we choose to\nwrite the field equations as simply as possible.\n\nWe shall study a \\textit{test} scalar field in the background\nmetric generated by a spherical body, assuming that its\nbackreaction on this metric is negligible, which is always the\ncase for a small enough matter-scalar coupling\nconstant\\footnote{Anticipating on definition (\\ref{Eq10}) below,\nwe are actually talking here about the \\textit{effective}\nmatter-scalar coupling constant $\\alpha_\\text{eff}$, which takes\ninto account the induced coupling in presence of a cosmological\ntime evolution of the scalar field.}. We shall also check\n\\textit{a posteriori} in which conditions this backreaction is\nindeed negligible. 
It will thus suffice to focus first on the\nscalar field equation, which can be written as\n\\begin{equation}\n\\nabla_\\mu J^\\mu = -\\alpha T,\n\\label{Eq2}\n\\end{equation}\nwhere $J^\\mu \\equiv -(1\/\\sqrt{-g})\\delta S\/\\delta\n\\partial_\\mu\\varphi$ is the scalar field's current, and $T$\ndenotes the trace of $T^{\\mu\\nu} \\equiv (2\/\\sqrt{-g}) (\\delta\nS_\\text{matter}\/\\delta g_{\\mu\\nu})$, related to the physical\n(Jordan-frame) matter energy-momentum tensor\n$\\tilde T^{\\mu\\nu} \\equiv (2\/\\sqrt{-\\tilde g})\n(\\delta S_\\text{matter}\/\\delta \\tilde g_{\\mu\\nu})$ by\n$T^{\\mu\\nu} = e^{6\\alpha\\varphi}\\, \\tilde T^{\\mu\\nu}\n\\Rightarrow T = e^{4\\alpha \\varphi}\\, \\tilde T$. The\ncurrent $J^\\mu$ reads explicitly\n\\begin{equation}\n\\frac{1}{M_P^2} J^\\mu = 2 k_2 \\partial^\\mu\\varphi\n+ 2 \\frac{k_3}{M^2} \\Box\\varphi \\partial^\\mu\\varphi\n- \\frac{k_3}{M^2} \\nabla^\\mu\n\\left((\\partial_\\lambda\\varphi)^2\\right),\n\\label{Eq3}\n\\end{equation}\nand Eq.~(\\ref{Eq2}) takes thus the full form\n\\begin{equation}\nk_2\\Box\\varphi + \\frac{k_3}{M^2}\n\\left\\{(\\Box\\varphi)^2-(\\nabla_\\mu\\partial_\\nu\\varphi)^2\n-R^{\\mu\\nu}\\partial_\\mu\\varphi\\partial_\\nu\\varphi\\right\\}\n= -\\frac{\\alpha}{2M_P^2} T,\n\\label{Eq4}\n\\end{equation}\nwhere the Ricci tensor $R^{\\mu\\nu}$ enters because of a\ndifference of third-order covariant derivatives of the scalar\nfield:\n$[\\nabla_\\mu\\nabla_\\nu-\\nabla_\\nu\\nabla_\\mu]\\nabla^\\nu\\varphi =\n-R_\\mu^\\nu \\partial_\\nu \\varphi$. Note that this curvature tensor\ninvolves second derivatives of the metric, therefore\nEq.~(\\ref{Eq4}) actually mixes spin-0 and spin-2 degrees of\nfreedom. 
We will diagonalize them when studying perturbations in\nSec.~\\ref{Sec4}, but to derive the spherically-symmetric solution\nfor a test scalar field, it suffices to replace this Ricci tensor\nby its vacuum value $R^{\\mu\\nu} = 0$ outside the central massive\nbody.\n\n\\section{Background solution}\n\\label{Sec3}\nA static and spherically symmetric background metric reads in\nSchwarzschild coordinates\n\\begin{equation}\nds^2 = - e^{\\nu(r)} dt^2 + e^{\\lambda(r)}dr^2 + r^2 d\\Omega^2,\n\\label{Eq5}\n\\end{equation}\nwhere $\\lambda(r)$ and $\\nu(r)$ are two functions of the radial\ncoordinate. The exterior (vacuum) solution is well known to read\n$e^\\nu = e^{-\\lambda} = 1-r_S\/r$ in an asymptotically flat\nspacetime, where $r_S \\equiv 2Gm$ denotes the Schwarzschild\nradius of the central body of mass $m$, but we will also consider\nthe Schwarzschild-de Sitter solution in Sec.~\\ref{Sec6} below.\nWe look for a test scalar field solution of Eq.~(\\ref{Eq2}) or\n(\\ref{Eq4}), assuming that the cosmological evolution imposes a\nlinear time dependence, i.e., that $\\ddot \\varphi = 0$ (where a\ndot denotes a time derivative). As shown in \\cite{Deffayet:2010qz},\nsuch a linear time-evolution is a cosmological attractor. Let us\nlook for a solution of the form\n\\begin{equation}\n\\varphi = \\phi(r) + \\dot\\varphi_c t +\\varphi_0,\n\\label{Eq6}\n\\end{equation}\nwhere $\\dot\\varphi_c$ and $\\varphi_0$ are assumed to be\nconstants. The ansatz (\\ref{Eq6}) has been used in a similar\ncontext for the study of Galileon accretion\n\\cite{Babichev:2010kj} and the effects of a cosmologically\nevolving scalar field on the variation of the Newton constant\n\\cite{Babichev:2011iz}. Actually, since $\\varphi' = \\phi'$ (where\na prime denotes a radial derivative), the notation $\\phi$ will\nnot be useful in the following, and we can thus write\nEq.~(\\ref{Eq2}) in terms of $\\varphi'$. 
With the ansatz\n(\\ref{Eq6}), this field equation reduces in vacuum to a mere\n$\\partial_r \\left[r^2 e^{(\\lambda+\\nu)\/2}J^r\\right] = 0$, and it\ncan be integrated once as $e^{(\\lambda+\\nu)\/2} J^r = \\alpha M_P^2\nr_S\/r^2$, the constant of integration being imposed by the same\nEq.~(\\ref{Eq2}) within matter.\\footnote{\\label{foot2}Actually,\nthis constant of integration is slightly modified by self-gravity\neffects within the body, because the scalar field is sourced by\nthe \\textit{trace} of the matter energy-momentum tensor, with a\ndifferent pressure dependence than in Einstein's equations. We\nshould thus write rigorously, like in Brans-Dicke theory\n\\cite{Willbook,Damour:1992we},\n$e^{(\\lambda+\\nu)\/2} J^r = \\alpha (1-2s) M_P^2\nr_S\/r^2$, where $s \\equiv -\\partial\\ln m\/ \\partial\\ln G \\sim\n|E_\\text{grav}|\/m \\sim r_S\/r_\\text{body}$.} In terms of\n$\\varphi'$, this reads\n\\begin{equation}\n\\frac{2 k_3}{M^2r}\\left(1+\\frac{\\nu'r}{4}\\right)e^{-\\lambda}\n\\varphi'^2 + k_2 \\varphi'\n-\\frac{e^{(\\lambda-\\nu)\/2}}{2}\\left(\n\\frac{\\alpha r_S}{r^2} + \\frac{k_3}{M^2}\\,\\dot\\varphi_c^2\\,\n\\frac{\\nu'}{e^{(\\lambda+\\nu)\/2}} \\right) = 0,\n\\label{Eq7}\n\\end{equation}\nwhose two solutions are\n\\begin{equation}\n\\varphi' = -\\frac{k_2 M^2 e^\\lambda r}{4 k_3\\left(1\n+\\frac{\\nu' r}{4}\\right)}\n\\left[1\\pm\\sqrt{1+\\frac{4 k_3 r_S}{k_2^2\nM^2 r^3 e^{(\\lambda+\\nu)\/2}}\n\\left(1+\\frac{\\nu' r}{4}\\right)\n\\left(\\alpha + \\frac{k_3}{M^2}\\,\\dot\\varphi_c^2\\,\n\\frac{\\nu' r^2}{r_S e^{(\\lambda+\\nu)\/2}}\\right)}\\right].\n\\label{Eq8}\n\\end{equation}\nNote that for a test scalar field, whose backreaction on the\nmetric is negligible, these expressions (\\ref{Eq8}) are exact,\ni.e., correct to any post-Newtonian order (while taking into\naccount the slight numerical change of $\\alpha \\rightarrow \\alpha\n(1-2s)$ due to self-gravity, as underlined in\nfootnote~\\ref{foot2}). 
Of course, for the exterior Schwarzschild\nsolution, they can be simplified because $e^{\\lambda+\\nu} = 1$.\nMoreover, for $r \\gg r_S$, one may expand $\\lambda$ and $\\nu$ in\npowers of $r_S\/r$, and get\n\\begin{equation}\n\\varphi' = -\\frac{k_2 M^2 r}{4 k_3}\n\\left(1\\pm\\sqrt{1+\\frac{4 k_3 r_S}{k_2^2 M^2 r^3}\\,\n\\alpha_\\text{eff}}\\,\\right)\\times\n\\left[1+\\mathcal{O}\\left(\\frac{r_S}{r}\\right)\\right],\n\\label{Eq9}\n\\end{equation}\nwhere\n\\begin{equation}\n\\alpha_\\text{eff} \\equiv \\alpha\n+ \\frac{k_3 \\dot\\varphi_c^2}{M^2}\n\\label{Eq10}\n\\end{equation}\nis an effective matter-scalar coupling constant, modifying the\nbare one, $\\alpha$, because of the self-interaction of the scalar\nfield with its own cosmologically-imposed time evolution\n$\\dot\\varphi_c$. Let us underline that even if one assumes\n$\\alpha = 0$ in action (\\ref{Eq1}), i.e., \\textit{a priori} no\nmatter-scalar interaction, the cosmological time evolution of the\nscalar field does generate a nonvanishing coupling to matter.\nThis is due to the quadratic terms entering the current\n(\\ref{Eq3}), since the Christoffel symbols do depend on the\nbackground metric and thereby on the mass of the central body.\n\nWe will discuss in Secs.~\\ref{Sec5} and \\ref{Sec6} which sign\nshould be chosen in solution (\\ref{Eq8}), by studying the\nlarge-distance behavior of the scalar field. We will see in\nparticular that it does not reduce to (\\ref{Eq9}) at\ncosmologically large distances if spacetime is not assumed to be\nasymptotically flat (see Eqs.~(\\ref{Eq30}) to (\\ref{Eq32})). At\npresent, let us just note that the square root entering\n(\\ref{Eq9}) involves a contribution $\\propto 1\/r^3$, dominating\nat small enough distances (still assumed to be much larger than\nthe Schwarzschild radius $r_S$). 
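As an independent sanity check (ours, not part of the paper), one can verify numerically that the closed-form expressions (\ref{Eq8}) do solve the quadratic equation (\ref{Eq7}), here specialized to the exterior Schwarzschild metric $e^\nu = e^{-\lambda} = 1-r_S/r$ (so that $e^{(\lambda+\nu)/2}=1$); the parameter values are arbitrary test values.

```python
import math

# Residual of the quadratic (7), A*x**2 + k2*x - C = 0, evaluated at the
# closed-form roots (8), for the exterior Schwarzschild metric.
# All parameter values below are arbitrary test choices.

def residual(r, rS=1.0, k2=1.0, k3=1.0, M=1.0, alpha=0.1, phidot=0.05,
             sign=+1):
    f = 1.0 - rS / r                      # e^nu = e^(-lambda)
    nup = (rS / r**2) / f                 # nu'(r)
    bracket = 1.0 + nup * r / 4.0
    # coefficients of Eq. (7), with e^{(lambda+nu)/2} = 1
    A = 2.0 * k3 / (M**2 * r) * bracket * f
    C = 0.5 / f * (alpha * rS / r**2 + k3 / M**2 * phidot**2 * nup)
    # closed-form solution, Eq. (8); sign = +1 / -1 picks the branch
    disc = (1.0 + 4.0 * k3 * rS / (k2**2 * M**2 * r**3) * bracket
            * (alpha + k3 / M**2 * phidot**2 * nup * r**2 / rS))
    x = (-k2 * M**2 * r / (4.0 * k3 * bracket * f)
         * (1.0 + sign * math.sqrt(disc)))
    return A * x**2 + k2 * x - C
```

Both branches return a residual at machine-precision level, confirming the quadratic-formula algebra behind (\ref{Eq8}).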
We define the ``Vainshtein\nradius'' as\n\\begin{equation}\nr_V \\equiv \\left(\\frac{4 k_3 r_S}{M^2 k_2^2}\n\\alpha_\\text{eff}\\right)^{1\/3}.\n\\label{Eq11}\n\\end{equation}\nIt is positive only if $k_3 \\alpha_\\text{eff} > 0$, which is\nalways the case when the bare $\\alpha = 0$, but depends on the\nmodel when $\\alpha \\neq 0$. When $r_V < 0$, then Eqs.~(\\ref{Eq8})\nor (\\ref{Eq9}) simply do not give an acceptable solution for the\nscalar field at radii $r < |r_V|$. This means that the ansatz\n(\\ref{Eq6}) cannot be correct at such small distances, and the\nactual solution must mix its time and radial dependences in a\nmore subtle way. When $r_V$ is positive, on the other hand,\nEq.~(\\ref{Eq8}) or (\\ref{Eq9}) is a correct solution. In the\nrange $r_S \\ll r \\ll r_V$, the scalar field then takes the form\n\\begin{subequations}\n\\label{Eq12}\n\\begin{eqnarray}\n\\varphi' &=& \\mp\\frac{\\text{sign}(k_2 k_3)}{2}\\,\n\\sqrt{\\frac{r_S}{r}}\\sqrt{\\frac{M^2\\alpha_\\text{eff}}{k_3}}\n\\left[1+\\mathcal{O}\\left(\\frac{r_S}{r}\\right)\n+\\mathcal{O}\\left(\\frac{r^3}{r_V^3}\\right)\\right]\n\\label{Eq12a}\\\\\n&=& \\mp\\frac{k_2 M^2}{4 k_3}\n\\sqrt{\\frac{r_V^3}{r}}\n\\left[1+\\mathcal{O}\\left(\\frac{r_S}{r}\\right)\n+\\mathcal{O}\\left(\\frac{r^3}{r_V^3}\\right)\\right].\n\\label{Eq12b}\n\\end{eqnarray}\n\\end{subequations}\nNote in passing that the product $k_3\\varphi'$, which will play\nan important role in the following sections, has a sign which\ndoes not depend on $k_3$ (provided this Vainshtein regime exists,\ni.e., notably that $k_3 \\alpha_\\text{eff} > 0$), but which does\ndepend on the sign of $k_2$, on the contrary.\n\nIt should be underlined that this regime $r_S \\ll r \\ll r_V$ can\nexist even when $\\alpha = 0$, provided\n$|2k_3\\dot\\varphi_c\/k_2M^2| \\gg r_S$. 
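A quick numerical illustration of this regime (toy values of our own choosing, in units where M = k2 = k3 = 1, not taken from the paper): deep inside the Vainshtein radius, the exact lower-sign branch of Eq. (9) approaches the r^(-1/2) power law of Eq. (12b), up to the stated corrections.

```python
# Toy numerical check (our own values): for r_S << r << r_V, the exact
# lower-sign branch of Eq. (9) approaches the r**(-1/2) power law of
# Eq. (12b), up to corrections of relative order (r/r_V)**(3/2).
import math

k2 = k3 = M = 1.0
rS = 1.0e-6
alpha_eff = 2.5e5                       # chosen so that r_V = 1 in these units
rV = (4.0*k3*rS*alpha_eff/(M**2*k2**2))**(1.0/3.0)   # Eq. (11)

def phi_prime_exact(r):
    """Lower-sign branch of Eq. (9), written with r_V^3 of Eq. (11)."""
    return -(k2*M**2*r)/(4.0*k3)*(1.0 - math.sqrt(1.0 + (rV/r)**3))

def phi_prime_vainshtein(r):
    """Leading Vainshtein scaling, Eq. (12b) with the lower sign."""
    return (k2*M**2)/(4.0*k3)*math.sqrt(rV**3/r)

r = 1.0e-3                               # r_S << r << r_V
assert abs(phi_prime_exact(r)/phi_prime_vainshtein(r) - 1.0) < 1e-3
```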
Therefore, although one\ncould have naively deduced from (\\ref{Eq8})-(\\ref{Eq9}) that\n$\\varphi'$ either vanishes or is proportional to $r$ (depending\non the sign to be taken in this solution), Eq.~(\\ref{Eq12})\nactually proves it is proportional to $r^{-1\/2}$. The homogeneous\nscalar field assumed in Ref.~\\cite{Deffayet:2010qz} thus cannot\nbe used in the vicinity of a massive body, when there is a\nnonzero cosmological time evolution $\\dot\\varphi_c \\neq 0$.\nIt would remain possible to assume that $\\alpha_\\text{eff} = 0$,\nbut this would require fine-tuning the cosmologically-driven time\nderivative $\\dot\\varphi_c$ to the precise value\n$(-\\alpha\/k_3)^{1\/2}M$ involving the constant parameters of\naction (\\ref{Eq1}) (while also assuming that $\\alpha k_3 < 0$).\n\n\\section{Perturbations}\n\\label{Sec4}\nLet us now study the perturbations of both the metric and the\nscalar field around the background solution defined by\nEqs.~(\\ref{Eq5}) and (\\ref{Eq8}). We write\n$g_{\\mu\\nu}^\\text{full} = g_{\\mu\\nu} + h_{\\mu\\nu}$ and\n$\\varphi^\\text{full} = \\varphi + \\pi$, where $g_{\\mu\\nu}$ and\n$\\varphi$ denote the background fields, and $h_{\\mu\\nu}$ and\n$\\pi$ their perturbations. Our aim is to identify the pure\nhelicity-2 and 0 degrees of freedom, to analyze under which\nconditions they both carry positive energy, so that any ghost\ninstability is avoided. We expand action (\\ref{Eq1}) to second\norder in the perturbations, and keep only the kinetic terms,\nwhich involve at least two derivatives of these fields.
Of\ncourse, terms of the form $f(\\text{background}) h \\nabla\\nabla\n\\pi$ can be integrated by parts, and contribute thus to the\nkinetic terms as $-f(\\text{background}) \\nabla h \\nabla \\pi$.\nBecause of the cubic Galileon interaction in action (\\ref{Eq1}),\none actually also gets a term \\hbox{$\\propto 2\\partial_\\mu\\varphi\n\\partial^\\mu\\pi \\Box\\pi$}, involving \\textit{three} derivatives\nof the scalar perturbation $\\pi$, but this can be rewritten as\n$\\Box\\varphi(\\partial_\\mu\\pi)^2 - 2 \\nabla_\\mu\\partial_\\nu\\varphi\n\\partial^\\mu\\pi \\partial^\\nu\\pi$ by partial integration, as\nexpected because the Galileon field equations (perturbed or not)\nare known to involve at most second derivatives. The final sum of\nall kinetic terms for the perturbations $h_{\\mu\\nu}$ and $\\pi$\nreads\n\\begin{eqnarray}\n\\frac{1}{2M_P^2}\\,\\frac{\\mathcal{L}_2^\\text{kinetic}}{\\sqrt{-g}}\n&=& -\\frac{1}{8} \\nabla_\\mu\nh_{\\alpha\\beta}P^{\\alpha\\beta\\gamma\\delta}\n\\nabla^\\mu h_{\\gamma\\delta}\n+\\frac{1}{8}\\left(h^\\lambda_{\\nu;\\lambda}\n-\\frac{1}{2}h_{,\\nu}\\right)^2\n-\\frac{k_2}{2}\\left(\\partial_\\mu\\pi\\right)^2\n\\nonumber\\\\\n&&-\\frac{k_3}{2M^2}\\biggl[\n2\\Box\\varphi \\left(\\partial_\\mu\\pi\\right)^2\n- 2 \\nabla_\\mu\\partial_\\nu\\varphi\\,\n\\partial^\\mu\\pi \\partial^\\nu\\pi\n+ \\partial_\\mu\\varphi\\partial_\\nu\\varphi\\,\n\\partial_\\lambda\\pi\\nabla^\\lambda h^{\\mu\\nu}\n\\nonumber\\\\\n&&\\hphantom{-\\frac{k_3}{2M^2}}\n- 2\\partial^\\mu\\varphi \\partial^\\nu\\varphi\\,\n\\partial_\\mu\\pi\\, \\left(h^\\lambda_{\\nu;\\lambda}\n-\\frac{1}{2}h_{,\\nu}\\right)\n\\biggr],\n\\label{Eq13}\n\\end{eqnarray}\nwhere $P^{\\alpha\\beta\\gamma\\delta}\\equiv\n\\frac{1}{2}g^{\\alpha\\gamma}\ng^{\\beta\\delta}-\\frac{1}{4}g^{\\alpha\\beta}g^{\\gamma\\delta}$, the\nindices of $h_{\\mu\\nu}$ are raised with the background metric\n$g^{\\rho\\sigma}$, and $h\\equiv h^\\lambda_\\lambda$ is the trace of\n$h_{\\mu\\nu}$. 
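The partial integration invoked just above can be verified explicitly. Since the identity only uses a constant metric, a flat two-dimensional check suffices; a minimal sympy sketch (function and variable names are ours), with the boundary vector V^mu = 2 phi^nu pi_nu pi^mu - phi^mu (d pi)^2:

```python
# Flat-space check of: 2 d^mu(phi) d_mu(pi) Box(pi)
#   = Box(phi) (d pi)^2 - 2 phi_{mu nu} pi^mu pi^nu + total divergence.
import sympy as sp

x, y = sp.symbols('x y')
ph = sp.Function('varphi')(x, y)   # background field
pp = sp.Function('pi')(x, y)       # scalar perturbation

def grad(f):
    return [sp.diff(f, x), sp.diff(f, y)]

def dot(a, b):
    return a[0]*b[0] + a[1]*b[1]

def box(f):
    return sp.diff(f, x, 2) + sp.diff(f, y, 2)

lhs = 2*dot(grad(ph), grad(pp))*box(pp)

# bulk terms quoted in the text
hess = sum(sp.diff(ph, u, v)*sp.diff(pp, u)*sp.diff(pp, v)
           for u in (x, y) for v in (x, y))
bulk = box(ph)*dot(grad(pp), grad(pp)) - 2*hess

# total-divergence (boundary) vector V^mu = 2 phi^nu pi_nu pi^mu - phi^mu (d pi)^2
V = [2*dot(grad(ph), grad(pp))*sp.diff(pp, v)
     - sp.diff(ph, v)*dot(grad(pp), grad(pp)) for v in (x, y)]
divV = sp.diff(V[0], x) + sp.diff(V[1], y)

assert sp.expand(lhs - bulk - divV) == 0
```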
The presence of cross terms $f(\\text{background})\n\\nabla\\pi \\nabla h$ illustrates that the helicity-2 and 0 degrees\nof freedom are mixed. But a simple change of variables actually\nsuffices to diagonalize these kinetic terms. We define\n\\begin{equation}\n\\hbar_{\\mu\\nu} \\equiv h_{\\mu\\nu}\n+\\frac{4 k_3}{M^2}\\left[\\partial_\\mu\\varphi\\partial_\\nu\\varphi\n-\\frac{1}{2}g_{\\mu\\nu}\n\\left(\\partial_\\lambda\\varphi\\right)^2\\right] \\pi,\n\\label{Eq14}\n\\end{equation}\nand Eq.~(\\ref{Eq13}) then takes the form\n\\begin{equation}\n\\frac{1}{2 M_P^2}\\,\\frac{\\mathcal{L}_2^\\text{kinetic}}{\\sqrt{-g}}\n= -\\frac{1}{8} \\nabla_\\mu \\hbar_{\\alpha\\beta}\nP^{\\alpha\\beta\\gamma\\delta}\n\\nabla^\\mu \\hbar_{\\gamma\\delta}\n+\\frac{1}{8}\\left(\\hbar^\\lambda_{\\nu;\\lambda}\n-\\frac{1}{2}\\hbar_{,\\nu}\\right)^2\n-\\frac{1}{2}\\mathcal{G}^{\\mu\\nu}\\partial_\\mu\\pi\\partial_\\nu\\pi,\n\\label{Eq15}\n\\end{equation}\nwhere\n\\begin{equation}\n\\mathcal{G}^{\\mu\\nu} \\equiv\ng^{\\mu\\nu} \\left[k_2 + \\frac{2k_3}{M^2}\\Box\\varphi\n- \\frac{k_3^2}{M^4}\\left(\\partial_\\lambda\\varphi\\right)^4 \\right]\n-\\frac{2k_3}{M^2}\\nabla^\\mu\\partial^\\nu\\varphi\n+4 \\frac{k_3^2}{M^4}\\left(\\partial_\\lambda\\varphi\\right)^2\n\\partial^\\mu\\varphi\\partial^\\nu\\varphi,\n\\label{Eq16}\n\\end{equation}\nproving that $\\hbar_{\\mu\\nu}$ describes a pure helicity-2 field\npropagating in the curved background of $g_{\\mu\\nu}$, and that\nthe pure helicity-0 field $\\pi$ propagates in the effective\nmetric $\\mathcal{G}^{\\mu\\nu}$. Our full diagonalization recovers\nthus this effective metric first derived in\n\\cite{Deffayet:2010qz}, from a triangulation of the kinetic terms\n(using the clever trick of replacing the Ricci tensor entering\nEq.~(\\ref{Eq4}) by its source, obtained from the Einstein\nequations). 
The $\\mathcal{O}(k_3^2)$ terms are the only\nsubtleties introduced by this diagonalization, but we will see in\nthe following that they are subdominant in natural situations.\nThe crucial information brought by Eq.~(\\ref{Eq15}) is that the\nspin-2 graviton is never a ghost, and that the effective metric\n(\\ref{Eq16}) should be of signature $-+++$ for the scalar\nperturbation $\\pi$ to carry positive energy and have a well-posed\nCauchy problem. We shall perform this analysis in\nSecs.~\\ref{Sec5} and \\ref{Sec6}, for two different asymptotic\nconditions.\n\nBefore entering this discussion, let us underline that our\ndiagonalization (\\ref{Eq15}) also allows us to exhibit the\ninduced matter-scalar coupling already noticed in\nEq.~(\\ref{Eq10}) for the background solution. We can show now\nthat the scalar perturbation $\\pi$ is also directly coupled to\nmatter, even when the bare coupling constant $\\alpha$ vanishes.\nIndeed, since matter is assumed to be minimally coupled to\n$\\tilde g^\\text{full}_{\\mu\\nu}\n= \\text{exp}(2\\alpha\\varphi^\\text{full}) g^\\text{full}_{\\mu\\nu}$\n(where ``full'' means as before the background fields plus their\nperturbations), the action of a point particle reads\n\\begin{eqnarray}\nS_\\text{matter} &=& -\\int mc\n\\left(\\tilde g^\\text{full}_{\\mu\\nu} dx^\\mu dx^\\nu\\right)^{1\/2}\n\\nonumber\\\\\n&=& -\\int mc\\, e^{\\alpha \\varphi}\n\\left[1+\\alpha\\pi+\\mathcal{O}(\\pi^2)\\right]\n\\left(g_{\\mu\\nu} dx^\\mu dx^\\nu\\right)^{1\/2}\n\\left[1-\\frac{1}{2}h_{\\rho\\sigma}\nu^\\rho u^\\sigma+\\mathcal{O}(h^2)\\right],\n\\label{Eq17}\n\\end{eqnarray}\nwhere $u^\\lambda \\equiv dx^\\lambda\/ (g_{\\mu\\nu} dx^\\mu\ndx^\\nu)^{1\/2}$ denotes the unit 4-velocity of the particle (with\nrespect to the background metric $g_{\\mu\\nu}$). Although there\ndoes not seem to exist any linear coupling to $\\pi$ when $\\alpha\n= 0$, we actually know that $h_{\\mu\\nu}$ describes a mixing of\nspin-2 and spin-0 degrees of freedom. 
If we replace it in terms\nof $\\pi$ and the actual spin-2 excitation $\\hbar_{\\mu\\nu}$,\nEq.~(\\ref{Eq14}), we thus get\n\\begin{eqnarray}\nS_\\text{matter} = -\\int mc\\, \\left(\\tilde g_{\\mu\\nu}\ndx^\\mu dx^\\nu\\right)^{1\/2}\n\\biggl[1&+&\\left\\{\\alpha\n+\\frac{2k_3}{M^2}\\left(u^\\lambda\\partial_\\lambda\\varphi\\right)^2\n+\\frac{k_3}{M^2}\\left(\\partial_\\lambda\\varphi\\right)^2\n\\right\\}\\pi\n\\nonumber\\\\\n&-&\\frac{1}{2}\\hbar_{\\rho\\sigma}u^\\rho u^\\sigma\n+\\mathcal{O}\\left((\\hbar_{\\rho\\sigma},\\pi)^2\\right)\\biggr].\n\\label{Eq18}\n\\end{eqnarray}\nThe quantity within curly brackets plays the role of an effective\nlinear coupling constant of matter to the scalar perturbation\n$\\pi$, and it can be nonzero even if $\\alpha = 0$. It should\n\\textit{a priori} not be confused with $\\alpha_\\text{eff}$,\ndefined in Eq.~(\\ref{Eq10}), which described the effective\ncoupling of the \\textit{background} scalar field $\\varphi$ to the\nmatter source. However, at lowest post-Newtonian order, they\nactually coincide when they are constant, notably in a background\nsuch that $\\dot\\varphi_c\\neq 0$ but $\\partial_r\\varphi = 0$.\nIndeed, we have $-\\left(\\partial_\\lambda\\varphi\\right)^2 =\ne^{-\\nu}\\dot\\varphi_c^2 =\n\\left(u^\\lambda\\partial_\\lambda\\varphi\\right)^2 +\\mathcal{O}(v)$,\nwhere $v$ is the particle's velocity, so that the quantity within\ncurly brackets in (\\ref{Eq18}) reduces to (\\ref{Eq10}).\n\n\\section{Asymptotic Minkowski spacetime}\n\\label{Sec5}\nWe consider in this section that spacetime is asymptotically\nMinkowskian, i.e., we impose that the spherically symmetric\nmetric (\\ref{Eq5}) is given by the Schwarzschild solution $e^\\nu\n= e^{-\\lambda} = 1-r_S\/r$, and we consistently assume that the\nbackground scalar field has no cosmological time evolution,\n$\\dot\\varphi_c = 0$. 
Then the upper sign of solution (\\ref{Eq8}),\n(\\ref{Eq9}) or (\\ref{Eq12}) must be discarded, otherwise the\nscalar field would diverge at spatial infinity (as well as its\nderivative and its energy-momentum tensor). With the lower\nsign, Eq.~(\\ref{Eq9}) gives for $r\\rightarrow\\infty$ the standard\nbehavior of a Brans-Dicke scalar field,\n\\begin{subequations}\n\\label{Eq19}\n\\begin{eqnarray}\n\\varphi' &=& \\frac{\\alpha r_S}{2 k_2 r^2}\n\\left[1+\\mathcal{O}\\left(\\frac{r_S}{r}\\right)\n+\\mathcal{O}\\left(\\frac{r_V^3}{r^3}\\right)\\right]\n\\label{Eq19a}\\\\\n\\Rightarrow\\quad\\varphi &=&\n\\varphi_0 - \\frac{\\alpha r_S}{2 k_2 r}\n\\left[1+\\mathcal{O}\\left(\\frac{r_S}{r}\\right)\n+\\mathcal{O}\\left(\\frac{r_V^3}{r^3}\\right)\\right].\n\\label{Eq19b}\n\\end{eqnarray}\n\\end{subequations}\nIn this large-distance regime, the perturbations of the scalar\nfield also behave as in Brans-Dicke theory, i.e., they propagate\nin an effective metric (\\ref{Eq16}) dominated by its standard\n$k_2g^{\\mu\\nu}$ contribution. In conclusion, $k_2$ \\textit{must}\nbe positive for the scalar degree of freedom to carry positive\nenergy in the asymptotic $r\\rightarrow\\infty$ region.\n\nOn the other hand, in the Vainshtein regime $r\\ll r_V$ (which\nexists only if $k_3\\alpha > 0$), the background scalar field\ntakes the form (\\ref{Eq12}) with its lower sign, and the\neffective metric (\\ref{Eq16}), in which scalar perturbations\npropagate, is dominated by its $k_3$ contributions. Indeed,\nwe have $(2k_3\/M^2)\\Box\\varphi \\approx \\frac{3}{4}\\, k_2\n(r_V\/r)^{3\/2} \\gg k_2 > 0$. 
Similarly, the\n$-(2k_3\/M^2)\\nabla^\\mu\\partial^\\nu\\varphi$ contribution to\n$\\mathcal{G}^{\\mu\\nu}$ gives at lowest post-Newtonian order\n$\\frac{1}{4}\\, k_2 (r_V\/r)^{3\/2}$ for the $\\mathcal{G}^{rr}$\ncomponent, minus twice this expression (multiplied by\n$g^{\\theta\\theta}$ and $g^{\\phi\\phi}$) for the angular\n$\\mathcal{G}^{\\theta\\theta}$ and $\\mathcal{G}^{\\phi\\phi}$\ncomponents, and again this expression but multiplied by a\nnegligible factor $r_S\/r$ for the $\\mathcal{G}^{00}$ component.\nFinally, if we assume that $\\alpha^2\/k_2$ is at most of order one\n($\\alpha^2\/k_2 \\ll 1$ still being allowed), in order to avoid\ntoo large a matter-scalar coupling in the asymptotic\n$r\\rightarrow\\infty$ region, then the assumption $r_S \\ll r \\ll\nr_V$ implies $(k_3\\varphi'^2\/M^2)^2 \\approx (\\alpha r_S\/4 r)^2\n\\ll k_2 \\ll (2k_3\/M^2)\\Box\\varphi$, therefore the\n$\\mathcal{O}(k_3^2)$ terms entering the effective metric\n(\\ref{Eq16}) are fully negligible: They are of second\npost-Newtonian order beyond the Vainshtein effect. In conclusion,\nthe effective metric (\\ref{Eq16}) reads at lowest order\n\\begin{equation}\n\\mathcal{G}^{\\mu\\nu}_\\text{Vainshtein} \\approx\n\\frac{k_2}{4} \\left(\\frac{r_V}{r}\\right)^{3\/2}\n\\text{diag}\\left(-3, 4, \\frac{1}{r^2},\n\\frac{1}{r^2 \\sin^2\\theta}\\right),\n\\label{Eq20}\n\\end{equation}\nand it therefore has the right $-+++$ signature, warranting that\nthe Cauchy problem is well posed and that the scalar field\ncarries positive energy. The large numerical factor\n$(r_V\/r)^{3\/2}$ implies that scalar perturbations are weakly\ncoupled to matter; this is a consequence of the Vainshtein\nmechanism. The different integers entering the diagonal matrix\nmean that the speed of scalar waves is $c\/\\sqrt{3}$ in the\northoradial directions, but $2c\/\\sqrt{3}$ in the radial one.
As\ndiscussed in \\cite{Bruneton:2006gf,Babichev:2007dw}, this\nsuperluminal radial velocity does not violate causality precisely\nbecause the Cauchy problem remains well posed simultaneously for\nall the fields: At any spacetime point, there exists a\nhypersurface which is spacelike with respect to both $g^{\\mu\\nu}$\nand $\\mathcal{G}^{\\mu\\nu}$, on which one may specify initial\ndata, and such hypersurfaces may be used to foliate the full\nspacetime.\n\nTo conclude, the curved-spacetime Galileon model (\\ref{Eq1}) in\nan asymptotic Minkowskian Universe is consistent both in the\nVainshtein regime ($r\\ll r_V$) and at large distances, provided\n$k_2 >0$ (and $k_3\\alpha > 0$ for the Vainshtein regime to\nexist). For such an asymptotic Minkowski spacetime, the\nbackground scalar field is given by solution (\\ref{Eq8}),\n(\\ref{Eq9}) or (\\ref{Eq12}) with their \\textit{lower} sign.\n\nThis conclusion also assumes that $\\alpha^2\/k_2$ is not an\nextremely large dimensionless number, i.e., that matter is not\ntoo strongly coupled to the scalar field. If $|\\alpha| \\gg\n(|k_3|\/ M^2 r_S^2)^{1\/3}$, then the $\\mathcal{O}(\\varphi'^4)$\ncontributions dominate in the effective metric (\\ref{Eq16}) at\nsmall enough distances, and its signature becomes $++--$ instead\nof $-+++$. Scalar perturbations are thus ill behaved, not only\nbecause their energy can be negative, but above all because their\nfield equation is not hyperbolic. However, the present study\nassumed from the beginning that the backreaction of the scalar\nfield on the Schwarzschild metric was negligible, which is\nobviously not the case in the very strong matter-scalar coupling\nlimit. Therefore, this non-hyperbolic signature does not even\nprove the inconsistency of the Galileon model (\\ref{Eq1}) in this\nlimit. 
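The component values of Eq. (20) can be reproduced symbolically. A minimal sympy sketch (our notation, not the paper's code) on a flat static background, using the lower-sign branch of Eq. (12b) and dropping the O(k3^2) terms and r_S/r corrections discussed above:

```python
import sympy as sp

r, rV, k2, k3, M = sp.symbols('r r_V k_2 k_3 M', positive=True)

# lower-sign Vainshtein branch, Eq. (12b): phi'(r) = (k2 M^2/4 k3) sqrt(rV^3/r)
dphi = (k2*M**2/(4*k3))*sp.sqrt(rV**3/r)
d2phi = sp.diff(dphi, r)

# flat static background: Box(phi) = phi'' + 2 phi'/r,
# nabla^r d^r phi = phi'', nabla^theta d^theta phi = phi'/r^3
pref = k2 + (2*k3/M**2)*(d2phi + 2*dphi/r)   # bracket of Eq. (16), O(k3^2) dropped
G00 = -pref                                   # g^{00} = -1, static scalar
Grr = pref - (2*k3/M**2)*d2phi
Gthth = pref/r**2 - (2*k3/M**2)*dphi/r**3

lead = (k2/4)*(rV/r)**sp.Rational(3, 2)       # overall factor in Eq. (20)

# the bare k2 pieces are subleading for r << r_V; the (rV/r)^(3/2) parts
# carry the relative factors -3, 4, 1/r^2 of Eq. (20)
assert sp.simplify(G00 - (-k2 - 3*lead)) == 0
assert sp.simplify(Grr - (k2 + 4*lead)) == 0
assert sp.simplify(Gthth - (k2 + lead)/r**2) == 0
```

The ratios -3 : 4 : 1 recovered here are exactly the diagonal entries of Eq. (20), encoding the wave speeds 2c/sqrt(3) (radial) and c/sqrt(3) (orthoradial) quoted in the text.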
It just means that we \\textit{must} assume as above that\n$\\alpha^2\/k_2$ is at most of order one, otherwise the scalar\nbackreaction on the metric cannot be neglected.\n\nTo estimate this backreaction, it suffices to compare the\nscalar's energy-momentum tensor\n\\begin{eqnarray}\n\\frac{1}{M_P^2}\\, T_{\\mu\\nu}(\\varphi) &=&\nk_2 \\Bigl[2\\,\\partial_\\mu\\varphi\\,\\partial_\\nu\\varphi\n- g_{\\mu\\nu}\n\\left(\\partial_\\lambda\\varphi\\right)^2\\Bigr]\n+\\frac{k_3}{M^2}\\Bigl[\n2\\,\\Box\\varphi\\, \\partial_\\mu\\varphi\\,\\partial_\\nu\\varphi\n\\nonumber\\\\\n&&+ g_{\\mu\\nu}\\partial_\\rho\\varphi\\,\n\\nabla^\\rho\\left(\\partial_\\lambda\\varphi\\right)^2\n- \\partial_\\mu\\varphi\\,\n\\nabla_\\nu\\left(\\partial_\\lambda\\varphi\\right)^2\n- \\partial_\\nu\\varphi\\,\n\\nabla_\\mu\\left(\\partial_\\lambda\\varphi\\right)^2\n\\Bigr],\n\\label{Tmunu}\n\\end{eqnarray}\nwith that of the matter source, or more specifically their\nspatial integrals $\\int_0^r \\left(-T_0^0+T_i^i\\right) 4\\pi r'^2\ndr'$, which generate the Newtonian potential $\\frac{1}{2}(g_{00}\n+1)$ at a distance $r$ from the center of the body. For any $r$,\none then finds that the scalar's contribution is a factor\n$\\mathcal{O}\\left( \\alpha^{5\/3} (M r_S)^{2\/3}\\right)$ smaller\nthan that of matter. Our analysis can thus be trusted only if\n$\\alpha$ is not too large, so that it does not compensate the\nfactor $(M r_S)^{2\/3}$. On the other hand, even if the\nmatter-scalar coupling is of order one (i.e., $\\alpha^2\/k_2 \\sim\n1$), then the scalar's backreaction is negligible as soon as\n$1\/M$ is chosen large enough with respect to the Schwarzschild\nradius of any body. In Galileon models, $M$ is generally assumed\nto be of the order of the Hubble constant $H$ (as we will\nactually derive in Sec.~\\ref{Sec6} below, though assuming a\ndifferent asymptotic Universe).
Then $M r_S$ is always an\nextremely small number, about $10^{-23}$ for the Sun, $10^{-11}$\nfor a galaxy, and still $10^{-8}$ for a cluster. Therefore, our\ntest scalar field approximation is fully safe if $M\\sim H$ and\n$\\alpha^2\/k_2$ is at most of order one.\n\n\\section{Asymptotic de Sitter spacetime}\n\\label{Sec6}\nLet us first consider an isotropic and homogeneous Universe,\ndescribed by the Friedmann-Lema\\^{\\i}tre-Robertson-Walker (FLRW)\nmetric $ds^2 = -d\\tau^2 +\na(\\tau)^2\\left(d\\rho^2+\\rho^2d\\Omega^2\\right)$, where the\ncosmological time $\\tau$ and the comoving radius $\\rho$ should\nnot be confused with the time $t$ and radial coordinate $r$ of\nthe Schwarzschild metric (\\ref{Eq5}). Since action (\\ref{Eq1})\ndoes not involve any cosmological constant, the expansion of the\nUniverse will be caused by the Galileon field $\\varphi$ itself.\nAs was shown in \\cite{Deffayet:2010qz}, the combined Einstein and\nscalar field equations indeed admit a self-accelerating solution.\nLet us assume, like in Eq.~(\\ref{Eq6}) above, that the scalar\nfield has a linear time dependence, $\\varphi = \\dot\\varphi_c \\tau\n+\\varphi_0$, i.e., that $\\ddot\\varphi = 0$ (where a dot denotes a\nderivative with respect to $\\tau$). Then the field equations read\n\\begin{subequations}\n\\label{Eq21}\n\\begin{eqnarray}\n3 H^2 &=& \\frac{\\varepsilon}{M_P^2}\n+ k_2 \\dot\\varphi_c^2 - 6 H\\,\\frac{k_3}{M^2}\\,\\dot\\varphi_c^3,\n\\label{Eq21a}\\\\\n-\\dot H &=&\\frac{\\varepsilon+p}{2M_P^2}\n+k_2\\dot\\varphi_c^2-3 H\\,\\frac{k_3}{M^2}\\,\\dot\\varphi_c^3,\n\\label{Eq21b}\\\\\n\\frac{\\partial_\\tau\\left(a^3 J^0\\right)}{a^3}\n&=& \\alpha (\\varepsilon-3p),\n\\label{Eq21c}\n\\end{eqnarray}\n\\end{subequations}\nwhere $H\\equiv \\dot a\/a$ is the Hubble parameter, and\n$\\varepsilon$ and $p$ denote the energy density and pressure\nof matter, that we shall neglect in the following. 
[Actually,\nEqs.~(\\ref{Eq21a}) and (\\ref{Eq21c}) are valid even if\n$\\ddot\\varphi\\neq 0$.] The only\nChristoffel contribution to the current (\\ref{Eq3}) comes from\n$\\Box\\varphi = -3H\\dot\\varphi_c$, and Eq.~(\\ref{Eq21c}) reduces\nto\n\\begin{equation}\n\\partial_\\tau\\left(k_2 a^3 \\dot\\varphi_c\n-3 H \\frac{k_3}{M^2}\\,a^3 \\dot\\varphi_c^2\\right) = 0,\n\\label{Eq22}\n\\end{equation}\nwhich admits the solution $\\dot\\varphi_c = k_2M^2\/3Hk_3$.\n[In an expanding Universe, the other solutions, $J^0 =\n\\text{const}\/a^3 \\rightarrow 0$, tend either towards this\none or towards $\\dot\\varphi_c = 0$.] Then\nEq.~(\\ref{Eq21b}) shows that the scalar-field analogue of\n$\\varepsilon+p$ vanishes, i.e., that it behaves as an effective\ncosmological constant, and we consistently find that $H$ is also\nconstant. Finally, Eq.~(\\ref{Eq21a}) reads $3 H^2 =\n-k_2\\dot\\varphi_c^2$, implying that $k_2$ \\textit{must} be\nnegative for this self-accelerating solution to exist, and it\ngives the numerical value\n\\begin{equation}\nH^2 = \\frac{M^2}{|k_3|}\\left(\\frac{|k_2|}{3}\\right)^{3\/2}.\n\\label{Eq23}\n\\end{equation}\nThis allows us to rewrite $\\dot\\varphi_c$ in terms of the\nconstants entering action (\\ref{Eq1}). Assuming an expanding\nUniverse (i.e., $H > 0$), we get\n\\begin{equation}\n\\dot\\varphi_c = -\\text{sign}(k_3)\n\\left(\\frac{|k_2|}{3}\\right)^{1\/4} \\frac{M}{\\sqrt{|k_3|}}.\n\\label{Eq24}\n\\end{equation}\n\nLet us now consider a static and spherically symmetric body\nembedded in such an expanding Universe. As before, we assume that\nthe backreaction of the \\textit{local} scalar field on the metric\nis negligible, but its cosmological time evolution (\\ref{Eq24})\nis obviously taken into account since it is responsible for the\naccelerated expansion. 
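As a symbolic cross-check of this self-accelerating branch, one can verify that Eqs. (23) and (24) satisfy both the vacuum current-conservation condition J^0 = 0 and the vacuum Friedmann equation 3H^2 = -k2 phidot_c^2. A short sympy sketch writing k2 = -|k2| (variable names are ours):

```python
import sympy as sp

M, q2, q3 = sp.symbols('M q2 q3', positive=True)   # q2 = |k2|, q3 = |k3|

for s3 in (+1, -1):                                 # either sign of k3
    k2, k3 = -q2, s3*q3                             # k2 < 0 is required
    H = sp.sqrt((M**2/q3)*(q2/3)**sp.Rational(3, 2))        # Eq. (23), H > 0
    pdc = -s3*(q2/3)**sp.Rational(1, 4)*M/sp.sqrt(q3)       # Eq. (24)

    # vacuum scalar-field equation J^0 = 0: k2*pdc - 3 H (k3/M^2) pdc^2 = 0
    assert sp.simplify(k2*pdc - 3*H*(k3/M**2)*pdc**2) == 0
    # vacuum Friedmann equation: 3 H^2 = -k2 * pdc^2
    assert sp.simplify(3*H**2 + k2*pdc**2) == 0
```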
The local scalar-field solution can still\nbe written in the form (\\ref{Eq8}), (\\ref{Eq9}) or (\\ref{Eq12}),\nat least at small enough distances, and the effective metric in\nwhich scalar perturbations propagate is still given by\nEq.~(\\ref{Eq16}). We saw in Sec.~\\ref{Sec5} that the scalar\nperturbations are ghostlike at large distances if $k_2 < 0$, but\nthis was derived while assuming asymptotic flatness and thereby\nthe Brans-Dicke like behavior (\\ref{Eq19}) of the background\nscalar field at infinity. Therefore this previous result is no\nlonger valid in the present de Sitter Universe. However, the\nsmall-distance physics remains \\textit{a priori} unchanged,\nnotably within the Vainshtein radius, and we saw that the\nsignature of the effective metric (\\ref{Eq16}) depended on the\nsign of $k_3 \\Box\\varphi$. As mentioned below Eq.~(\\ref{Eq12}),\nthis sign actually does not depend on $k_3$, but it does depend\non $k_2$, and Eq.~(\\ref{Eq20}) confirms that we \\textit{a priori}\nget a ghostlike scalar perturbation if $k_2 < 0$. The stability\nof the self-accelerating solution seems thus spoiled by the\npresence of any massive body.\n\nA first way out would be to consider the fine-tuned model such\nthat $\\alpha_\\text{eff} = 0$. Then the scalar field would not be\nperturbed at all by local matter, there would not exist any\nVainshtein regime, and the proof in Ref.~\\cite{Deffayet:2010qz}\nthat the model is stable would then be valid. This assumption\n$\\alpha_\\text{eff} = 0$ actually seems more natural in the\npresent self-accelerating Universe, since the condition\n$\\dot\\varphi_c = (-\\alpha\/k_3)^{1\/2}M$ derived at the end of\nSec.~\\ref{Sec3} translates as $\\alpha =\n-\\text{sign}(k_3)\\sqrt{|k_2|\/3}$, involving only constant\nparameters. 
However, if we take into account the matter sources\nin Eqs.~(\\ref{Eq21}), to derive a more realistic expansion of the\nUniverse, then neither $H$ nor $\\dot\\varphi_c$ will remain\nconstant, and we will eventually reach an epoch for which\n$\\alpha$ is no longer tuned to the right numerical value.\nMoreover, the self-gravity effects mentioned in footnote\n\\ref{foot2} mean that the bare coupling constant $\\alpha$ needs\nto be replaced by a body-dependent product $\\alpha (1-2s)$, which\ncannot be fine-tuned to $-\\text{sign}(k_3)\\sqrt{|k_2|\/3}$ for all\nbodies. Therefore, even for this specific model, one expects that\nthe scalar field will be directly coupled to matter with a\nnonvanishing $\\alpha_\\text{eff}$, at least at some cosmological\nepoch, and this seems to generically lead to ghost instabilities\nbecause $k_2 < 0$.\n\nIn fact, the small-distance effective metric (\\ref{Eq20}) is\n\\textit{not} correct in the present expanding Universe. Indeed,\nsolution (\\ref{Eq12}) does depend on the sign of $k_2$, but also\non the global $\\mp$ sign. We did prove in Sec.~\\ref{Sec5} that\nthe lower ($+$) sign gave the correct solution in an\nasymptotically Minkowskian Universe, but this is no longer the\ncase in a de Sitter one. This comes from the fact that the FLRW\ncoordinates $\\tau$ and $\\rho$, useful at cosmologically large\ndistances, do not coincide with the $t$ and $r$ coordinates\ndefining the static and spherically-symmetric local metric\n(\\ref{Eq5}). 
In the presence of a cosmological constant $\\Lambda = 3\nH^2$, here mimicked by the self-accelerating solution\n(\\ref{Eq23}), we do know that the exact Schwarzschild-de Sitter\nsolution can be written as Eq.~(\\ref{Eq5}) with\n\\begin{equation}\ne^\\nu = e^{-\\lambda} = 1-r_S\/r - (H r)^2.\n\\label{Eq25}\n\\end{equation}\nThis can be matched to an asymptotic FLRW coordinate system by\ndefining\n\\begin{equation}\n\\begin{aligned}[c]\n\\tau&=t+\\frac{1}{2H}\\ln\\left[1-(Hr)^2\\right],\\\\\n\\rho&=\\frac{e^{-Ht}}{\\sqrt{1-(Hr)^2}}\\, r\\,,\n\\end{aligned}\n\\qquad\\Leftrightarrow\\qquad\n\\begin{aligned}[c]\nt&=\\tau-\\frac{1}{2H}\\ln\\left[1\n-\\left(H e^{H\\tau}\\rho\\right)^2\\right],\\\\\nr&=e^{H\\tau}\\rho\\,.\n\\end{aligned}\n\\label{Eq26}\n\\end{equation}\nLet us also set\n\\begin{equation}\nB\\equiv 1 - \\frac{r_S}{e^{H\\tau} \\rho}\n- \\left(H e^{H\\tau} \\rho \\right)^2.\n\\label{Eq27}\n\\end{equation}\nThen the Schwarzschild-de Sitter metric (\\ref{Eq5})-(\\ref{Eq25})\nbecomes\n\\begin{subequations}\n\\label{Eq28}\n\\begin{eqnarray}\nds^2 &=& -B\n\\left[\\frac{d\\tau+H e^{2H\\tau}\\rho\\,\nd\\rho}{1-\\left(H e^{H\\tau}\\rho\\right)^2}\\right]^2\n+ e^{2H\\tau}\\left[\\frac{\\left(d\\rho+H\\rho\\,\nd\\tau\\right)^2}{B} + \\rho^2 d\\Omega^2\\right]\n\\label{Eq28a}\\\\\n&=&-\\left[1-\\frac{r_S}{H^2\\left(e^{H\\tau}\n\\rho\\right)^3}+\\mathcal{O}\n\\left(\\frac{1}{\\rho^4}\\right)\\right]d\\tau^2\n+\\left[\\frac{4r_S}{H^3e^{3 H\\tau}\\rho^4}\n+\\mathcal{O}\\left(\\frac{1}{\\rho^5}\\right)\\right]d\\tau\\, d\\rho\n\\nonumber\\\\\n&&+e^{2H\\tau}\n\\left\\{\\left[1+\\frac{r_S}{H^2\\left(e^{H\\tau}\\rho\\right)^3}\n+\\mathcal{O}\\left(\\frac{1}{\\rho^4}\\right)\\right]d\\rho^2\n+\\rho^2 d\\Omega^2\\right\\},\n\\label{Eq28b}\n\\end{eqnarray}\n\\end{subequations}\nin which one recognizes the asymptotic de Sitter metric $ds^2 =\n-d\\tau^2 + e^{2H\\tau}\\left(d\\rho^2+\\rho^2d\\Omega^2\\right)$ up to\ncorrections of order $\\mathcal{O}(r_S)$ caused by the local\nmassive 
body. Note that we merely changed coordinates, and that\nEq.~(\\ref{Eq28}) still defines the same \\textit{static} solution\nas (\\ref{Eq5})-(\\ref{Eq25}), in spite of the presence of\ntime-dependent exponentials. In particular, the Schwarzschild\nradius still corresponds to the constant $e^{H\\tau}\\rho = r_S$,\nthe factor $e^{H\\tau}$ compensating the fact that we now measure\nlengths with a varying (comoving) ruler. Note also that the\nspherical body is located at $\\rho = 0$ at any time, therefore it\nis comoving (actually static) in the background de~Sitter\nUniverse.\n\nTo derive the cosmological solution (\\ref{Eq23})-(\\ref{Eq24}), we\nassumed that the scalar field is homogeneous at large distances\nin de~Sitter coordinates, i.e., that it reads $\\varphi =\n\\dot\\varphi_c \\tau +\\varphi_0$ without any comoving radius\n($\\rho$) dependence. This means that in terms of the\nSchwarzschild coordinates $t$ and $r$, Eqs.~(\\ref{Eq26}),\n$\\varphi$ does depend on $r$. We explicitly get\n\\begin{eqnarray}\n\\varphi &=& \\dot\\varphi_c t\n+\\frac{\\dot\\varphi_c}{2H}\\ln\\left[1-(Hr)^2\\right]\n+ \\varphi_0\n\\nonumber\\\\\n\\Rightarrow\\quad\n\\varphi'&=& -\\dot\\varphi_c\\, \\frac{Hr}{1-(Hr)^2}\n= -\\frac{k_2 M^2}{3 k_3}\\,\\frac{r}{1-(Hr)^2},\n\\label{Eq29}\n\\end{eqnarray}\n$H^2$ being given by Eq.~(\\ref{Eq23}). In other words, the\ncorrect solution (\\ref{Eq8}) for the background scalar field\n$\\varphi'$, embedded in a self-accelerating Universe, should\ncontain a local $r$ dependence, and with the precise factor\n$-k_2M^2\/3k_3$ entering Eq.~(\\ref{Eq29}). Let us prove that this\nresult is given by the \\textit{upper} sign of (\\ref{Eq8}).\nIndeed, this solution can no longer be expanded as (\\ref{Eq9})\nfor too large radii $r$, because expressions (\\ref{Eq25}) for\n$\\nu(r)$ and $\\lambda(r)$ must now be used, so that $\\nu' e^{\\nu}\n= -\\lambda' e^{\\nu} = r_S\/r^2 - 2 H^2 r$. 
We get for such large\ndistances (still assumed smaller than $1\/(\\sqrt{3}\\,H)$)\n\\begin{equation}\n\\varphi' =\n-\\frac{k_2 M^2}{4 k_3}\\,\n\\frac{r}{1-\\frac{3}{2}(Hr)^2}\n\\left[1\\pm\\frac{1}{3}\\,\\frac{1-3(Hr)^2}{1-(Hr)^2}\n\\right]\\times\n\\left[1+\\mathcal{O}\\left(\\frac{r_S}{r}\\right)\\right],\n\\label{Eq30}\n\\end{equation}\ntherefore the upper sign recovers \\textit{exactly} the asymptotic\nbehavior derived in Eq.~(\\ref{Eq29}) from a matching with the\ncosmological solution, whereas the lower sign gives this result\ndivided by $\\left[2-3(Hr)^2\\right]$, i.e., with not only an\nerroneous factor $\\frac{1}{2}$ but also an incorrect radial\ndependence.\n\nSolution (\\ref{Eq8}), with its correct upper sign, may now be\nwritten for $Hr\\ll1$ as\n\\begin{equation}\n\\varphi' =\n-\\frac{k_2 M^2r}{4 k_3}\n\\left(1+\\sqrt{1-\\frac{8}{9}+\\frac{4 k_3 r_S}{k_2^2 M^2 r^3}\\,\n\\alpha_\\text{eff}}\\right)\\times\n\\left[1+\\mathcal{O}\\left(\\frac{r_S}{r}\\right)\n+\\mathcal{O}\\left(H^2r^2\\right)\\right],\n\\label{Eq31}\n\\end{equation}\nwhere the constant $-\\frac{8}{9}$ within the square root comes\n{}from a compensation between the $r_S\/r^3$ coefficient entering\n(\\ref{Eq8}) and the large-distance behavior of $\\nu' r^2\/r_S$,\nwhile using the values (\\ref{Eq23}) and (\\ref{Eq24}) for $H^2$\nand $\\dot\\varphi_c$. Note that this solution is different from\nEq.~(\\ref{Eq9}), which was valid for an asymptotically\nMinkowskian spacetime.
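This sign discussion can be checked symbolically: the radial dependence of Eq. (29) induced by the coordinate change of Eq. (26), the fact that the upper sign of Eq. (30) reproduces it exactly, and the spurious factor 1/[2-3(Hr)^2] affecting the lower sign. A minimal sympy sketch (our notation):

```python
import sympy as sp

t, r, H, pdc, k2, k3, M = sp.symbols('t r H phidot_c k_2 k_3 M', positive=True)
u = (H*r)**2

# Eq. (29): phi = phidot_c * tau(t, r) + const, with tau(t, r) from Eq. (26)
tau = t + sp.log(1 - u)/(2*H)
dphi_cosmo = sp.diff(pdc*tau, r)
assert sp.simplify(dphi_cosmo + pdc*H*r/(1 - u)) == 0

# Eq. (30) with its upper sign, dropping the O(r_S/r) corrections ...
upper = -(k2*M**2/(4*k3))*r/(1 - sp.Rational(3, 2)*u) \
        * (1 + sp.Rational(1, 3)*(1 - 3*u)/(1 - u))
# ... must equal the cosmological behavior of Eq. (29): -(k2 M^2/3 k3) r/(1-u)
assert sp.simplify(upper + (k2*M**2/(3*k3))*r/(1 - u)) == 0

# while the lower sign differs by the factor 1/(2 - 3u) noted in the text
lower = -(k2*M**2/(4*k3))*r/(1 - sp.Rational(3, 2)*u) \
        * (1 - sp.Rational(1, 3)*(1 - 3*u)/(1 - u))
assert sp.simplify(lower*(2 - 3*u) - upper) == 0
```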
In particular, (\\ref{Eq31}) does not\nreduce to the Brans-Dicke solution (\\ref{Eq19a}) for $r \\gg r_V$\n(while still assuming $r\\ll 1\/H$), but gives\n\\begin{equation}\n\\varphi' =\n-\\left(\\frac{k_2 M^2r}{3 k_3}\n+\\frac{3\\,\\alpha_\\text{eff}\\, r_S}{2k_2r^2}\\right)\\times\n\\left[1+\\mathcal{O}\\left(\\frac{r_S}{r}\\right)\n+\\mathcal{O}\\left(\\frac{r_V^6}{r^6}\\right)\n+\\mathcal{O}\\left(H^2r^2\\right)\\right],\n\\label{Eq32}\n\\end{equation}\nwith a different sign and numerical coefficient for the usual\n$\\mathcal{O}(1\/r^2)$ contribution, but above all involving a much\nlarger $\\mathcal{O}(r)$ term, that we already found in\nEq.~(\\ref{Eq29}) [it is $\\mathcal{O}\\left(r^3\/r_V^3\\right)$\nlarger than the $\\mathcal{O}(1\/r^2)$ contribution].\n\nIn the Vainshtein regime ($r_S \\ll r \\ll r_V$), on the other\nhand, we still get Eq.~(\\ref{Eq12}), but with the crucial\ninformation that the upper sign must be chosen:\n\\begin{equation}\n\\varphi' =\n-\\frac{k_2 M^2}{4 k_3}\n\\sqrt{\\frac{r_V^3}{r}}\n\\left[1+\\mathcal{O}\\left(\\frac{r_S}{r}\\right)\n+\\mathcal{O}\\left(\\frac{r^3}{r_V^3}\\right)\n+\\mathcal{O}\\left(H^2r^2\\right)\\right].\n\\label{Eq33}\n\\end{equation}\nThis is the opposite of the expression we used in\nSec.~\\ref{Sec5} for an asymptotically Minkowskian spacetime, and\nthe effective metric in which scalar perturbations propagate, at\nsmall distances, is thus the opposite of (\\ref{Eq20}) [up to\nnegligible corrections of relative order\n$\\mathcal{O}(\\sqrt{r_S\/r})$ or $\\mathcal{O}(\\sqrt{r^3\/r_V^3})$,\ndue to the nonvanishing time derivative $\\dot\\varphi_c$ we now\nmust take into account when computing (\\ref{Eq16})].
Since we\nknow that $k_2 < 0$ for the present self-accelerating Universe to\nbe a solution, this effective metric is therefore again of the\ncorrect signature $-+++$, warranting a well-posed Cauchy problem\nand the absence of ghosts.\n\nThe signature of this effective metric must also be studied at\nlarge distances $r \\gg r_V$. If one neglects all contributions\nproportional to the Schwarzschild radius $r_S$ of the massive\nbody, $\\mathcal{G}^{\\mu\\nu}$ is most conveniently computed in\nFLRW coordinates. Then one finds that all contributions to\nEq.~(\\ref{Eq16}) are of the same order of magnitude\n$\\mathcal{O}(k_2)$, including those of the usually negligible\n$(\\partial\\varphi)^4$ terms. The effective metric finally reads\n$\\mathcal{G}^{\\mu\\nu} = \\text{diag}(-2|k_2|,0,0,0) +\n\\mathcal{O}(r_S)$, meaning that the Cauchy problem remains well\nposed for scalar perturbations, but that their sound velocity\nvanishes, as was already noticed in \\cite{Deffayet:2010qz}. To\navoid any instability, the $\\mathcal{O}(r_S)$ corrections should\nthus contribute positively to the effective spatial metric\n$\\mathcal{G}^{ij}$. We thus need to compute them, while taking\ninto account the zeroth-order terms $\\mathcal{O}(r_S^0)$ although\nwe know they will eventually cancel. Indeed some of these\nzeroth-order terms get multiplied by $r_S$ when solution\n(\\ref{Eq8}) is expanded to compute (\\ref{Eq16}). Another\ncomplication is that $\\mathcal{G}^{0r} \\neq 0$ in Schwarzschild\ncoordinates (\\ref{Eq5})-(\\ref{Eq25}), i.e., that\n$\\mathcal{G}^{\\mu\\nu}$ is not diagonal. 
However, to determine its\nsignature, it suffices to consider the quadratic form\n$\\mathcal{G}_{\\mu\\nu} dx^\\mu dx^\\nu$ (where the effective\nmetric's indices are lowered with $g_{\\mu\\nu}$, i.e.,\n$\\mathcal{G}_{\\mu\\nu}$ is \\textit{not} the inverse of\n$\\mathcal{G}^{\\mu\\nu}$), and to note that\n$\\mathcal{G}_{00} dt^2 + 2 \\mathcal{G}_{0r} dt dr\n+ \\mathcal{G}_{rr} dr^2 = \\mathcal{G}_{00}\n\\left(dt+ \\mathcal{G}_{0r} dr\/\\mathcal{G}_{00}\\right)^2\n+ \\left(\\mathcal{G}_{rr}\n- \\mathcal{G}_{0r}^2\/\\mathcal{G}_{00}\\right) dr^2$.\nIn this new frame diagonalizing the effective metric, one finally\nfinds at lowest order $\\mathcal{G}_{\\mu\\nu} \\approx\n\\text{diag}\\left( -2|k_2| , 2 |k_2| \\epsilon, -|k_2| \\epsilon\nr^2, -|k_2| \\epsilon r^2\\sin^2\\theta \\right)$, where $\\epsilon\n\\equiv \\frac{3}{4}(r_V\/r)^3$, i.e., a signature $-+--$ instead of\n$-+++$, implying that the Cauchy problem is ill-posed for scalar\nperturbations propagating in the orthoradial directions (or that\nthey are unstable if one interprets these negative signs as an\nimaginary sound speed, such instabilities being \\textit{large} at\nmoderately large distances of a few $r_V$). In conclusion,\nalthough we found above that the physics of scalar perturbations\nis consistent in the Vainshtein regime $r \\ll r_V$, it is\n\\textit{not} at large distances $r \\gg r_V$.\n\nThis serious problem can \\textit{a priori} have several\nsolutions. The first naive idea would be to choose the theory\nparameters such that $r_V \\agt 1\/H$, so that any instability\nwould be hidden behind the cosmological horizon. 
But since $r_V\n\\propto r_S^{1\/3}$, this cannot work for all massive bodies, and\nit would in any case require a very large matter-scalar coupling\nconstant $\\alpha$, thereby spoiling the hyperbolicity in the\nVainshtein regime for the same reasons as at the end of\nSec.~\\ref{Sec5} (and being inconsistent with our assumption of a\nnegligible backreaction of the local scalar field on the\nbackground metric). A much better argument is that the scalar\nfield may never get out of the Vainshtein regime if one takes\ninto account the full matter distribution in the Universe: At any\nlocation, there would be a close enough and massive enough body\nsuch that $r < r_V$. But the strongest argument is that the\nvanishing sound speed found above at order $\\mathcal{O}(r_S^0)$\nis a consequence of the exact self-accelerating solution\n(\\ref{Eq23})-(\\ref{Eq24}). As soon as one takes into account\nmatter in the field equations (\\ref{Eq21}), then the sound speed\nbecomes nonvanishing and real, as was shown in\n\\cite{Deffayet:2010qz}, and it dominates at large distances over\nany $\\mathcal{O}(r_S)$ correction. For instance, if we assume\n$\\alpha = 0$ as in \\cite{Deffayet:2010qz}, the presence of a\npositive matter density $\\varepsilon$ in Eq.~(\\ref{Eq21a})\nincreases the value of $H$ with respect to the lowest-order\nexpression (\\ref{Eq23}). 
This implies a modification of\nEq.~(\\ref{Eq24}) for $\\dot\\varphi_c$, and the matter field\nequation $\\dot\\varepsilon + 3 H \\varepsilon = 0$ (assuming a\nvanishing pressure $p$) allows us to compute $\\ddot\\varphi_c$.\nWe finally find that the terms proportional to $\\Box\\varphi$ and\n$\\left(\\partial_\\lambda\\varphi\\right)^4$ in Eq.~(\\ref{Eq16}) both\ngive \\textit{positive} $\\mathcal{O}(\\varepsilon)$ contributions\nto the spatial effective metric $\\mathcal{G}^{ij}$, and we\nexplicitly obtain\n\\begin{equation}\n\\mathcal{G}^{ij} = \\frac{5}{18}\\, |k_2|\\, g^{ij}\n\\frac{\\varepsilon}{M_P^2 H^2}\n+\\mathcal{O}\\left(\\varepsilon^2\\right).\n\\label{GijNoAlpha}\n\\end{equation}\nSurprisingly, the result is fully different if we assume a\nnonzero bare coupling constant $\\alpha$, because the matter field\nequation $\\dot\\varepsilon + 3 H \\varepsilon = \\alpha \\varepsilon\n\\dot\\varphi_c$ implies a different time evolution for\n$\\varepsilon$, and the time integration of Eq.~(\\ref{Eq21c})\nmakes $\\alpha$ go away. One finds that the lowest-order\nexpressions of $H$ and $\\dot\\varphi_c$ both get multiplied by the\nsame factor $1 - \\varepsilon\/12 M_P^2 H^2 +\n\\mathcal{O}(\\varepsilon^2)$, and the matter field equation can\nagain be used to compute $\\ddot\\varphi_c$. Now the term\nproportional to $\\Box\\varphi$ in Eq.~(\\ref{Eq16}) happens to give\na negative $\\mathcal{O}(\\varepsilon)$ contribution to\n$\\mathcal{G}^{ij}$, but it is counterbalanced by the\n$\\left(\\partial_\\lambda\\varphi\\right)^4$ and\n$\\nabla^\\mu\\partial^\\nu\\varphi$ terms of (\\ref{Eq16}).\nWe finally get\n\\begin{equation}\n\\mathcal{G}^{ij} = \\frac{1}{18}\\, |k_2|\\, g^{ij}\n\\frac{\\varepsilon}{M_P^2 H^2}\n\\left(1+ \\text{sign}(k_3) \\alpha \\sqrt{\\frac{3}{|k_2|}}\\right)\n+\\mathcal{O}\\left(\\varepsilon^2\\right),\n\\label{Gij}\n\\end{equation}\nwhich is positive definite if $3\\alpha^2\/|k_2| < 1$ (and for even\nlarger $|\\alpha|$ if $k_3 \\alpha > 0$). 
The reason why\n(\\ref{Gij}) does not tend to (\\ref{GijNoAlpha}) when $\\alpha\n\\rightarrow 0$ is that they correspond to different\ncosmological initial conditions. In any case, their positivity\nshows that the ill-posed Cauchy problem found above was the\nconsequence of an oversimplified cosmological background, and\nthe signature of the effective metric at large distances is in fact\n$-+++$, as needed.\n\nTo conclude, the curved-spacetime Galileon model (\\ref{Eq1}) with\n$k_2 < 0$ admits a self-accelerating cosmological solution, and a\nspherical body embedded in this Universe generates a scalar field\ngiven by the \\textit{upper} sign of Eq.~(\\ref{Eq8}), i.e., by\n(\\ref{Eq31}), (\\ref{Eq32}) or (\\ref{Eq33}) depending on the\ndistance to this body ---~the Vainshtein regime (\\ref{Eq33})\nexisting only if $k_3 \\alpha_\\text{eff} > 0$. Perturbations\naround this solution carry positive energy and have a well-posed\nCauchy problem in the Vainshtein regime $r \\ll r_V$. Farther\naway, one must take into account the matter content of the\nUniverse to derive the cosmological expansion, and the effective\nmetric in which scalar perturbations propagate then remains of\nthe correct hyperbolic signature $-+++$.\n\nLet us finally mention under which conditions the local scalar\nfield's backreaction on the metric is indeed negligible, as was\nassumed to draw the above conclusions. First, this is always the\ncase if $\\alpha_\\text{eff}$ is small enough, i.e., if one tunes\nthe bare matter-scalar coupling constant to $\\alpha \\approx\n-\\text{sign}(k_3)\\sqrt{|k_2|\/3}$, so that it almost compensates\nthe cosmologically induced coupling $k_3\\dot\\varphi_c^2\/M^2$\nentering Eq.~(\\ref{Eq10}). Indeed, the spatial integral of\nEq.~(\\ref{Tmunu}) over a sphere of radius $r < 1\/H$ is at most of\norder $\\mathcal{O}\\left(\\alpha_\\text{eff}\\, r_S\\right)$,\nnegligible with respect to the effect of the material body of\nmass $\\frac{1}{2}\\,r_S$. 
However, for a generic bare coupling\nconstant such that $\\alpha^2\/|k_2| \\alt 1$ (a small value being\nalso allowed), Eqs.~(\\ref{Eq10}) and (\\ref{Eq24}) imply that\n$\\alpha_\\text{eff}^2\/|k_2|$ is generically of the order of unity,\ntherefore one actually expects large deviations from our results\nabove. But even in such a case, the scalar field's backreaction\nis still negligible in the Vainshtein regime $r \\ll r_V$, because\nit is of order $\\mathcal{O}\\left(\\alpha_\\text{eff}\\, r_S\n(r\/r_V)^{3\/2}\\right)$. Therefore, our previous conclusion that\nthe Galileon model (\\ref{Eq1}) is stable in the Vainshtein regime\nremains valid even for a matter-scalar coupling of order one. On\nthe other hand, for such an $\\mathcal{O}(1)$ coupling, our test\nscalar field approximation cannot be trusted at distances $r \\agt\nr_V$, and further work is needed to actually prove\nstability~\\cite{BabichevEtAl}. Finally, for cosmologically large\ndistances $r \\sim 1\/H \\gg r_V$, both the effects of the massive\nbody and the locally generated scalar field are negligible, and\nwe asymptotically reach an effective metric $\\mathcal{G}^{\\mu\\nu}\n\\approx |k_2| \\text{diag}\\left(-2, \\mathcal{O}(1)\\, \\varepsilon\\,\ng^{ij}\/M_P^2 H^2 \\right)$, with the correct hyperbolic signature\n$-+++$.\n\n\\section{Scalar waves within matter}\n\\label{Sec7}\nAlthough we have shown in Secs.~\\ref{Sec5} and \\ref{Sec6} that\nthe field equations for scalar perturbations are hyperbolic in\nthe Vainshtein regime, and thereby that there is no ghost\ninstability at small enough distances from a massive body, the\ngrowth of such perturbations \\textit{within} the body deserves a\nmore careful study. First of all, the vacuum solution (\\ref{Eq8})\nfor the scalar field is no longer valid inside matter, therefore\nthe hyperbolicity of the effective metric (\\ref{Eq16}) needs to\nbe checked. To simplify, let us consider a spherical body of\nconstant density. 
As derived on page 331 of Ref.~\\cite{Weinberg},\nthe interior solution for the metric (\\ref{Eq5}) (neglecting\ncosmological corrections of order $H^2 r^2$) reads\n\\begin{subequations}\n\\label{Eq34}\n\\begin{eqnarray}\ne^{\\nu(r)} &=&\n\\frac{1}{4}\\left[3\\left(1-\\frac{r_S}{r_*}\\right)^{1\/2}\n-\\left(1-\\frac{r_S r^2}{r_*^3}\\right)^{1\/2}\\right]^2\n= 1 - \\left(3 - \\frac{r^2}{r_*^2}\\right) \\frac{r_S}{2r_*}\n+\\mathcal{O}\\left(r_S^2\\right),\n\\label{Eq34a}\\\\\ne^{\\lambda(r)} &=& \\left(1-\\frac{r_S r^2}{r_*^3}\\right)^{-1}\n= 1 + \\frac{r_S r^2}{r_*^3}\n+\\mathcal{O}\\left(r_S^2\\right),\n\\label{Eq34b}\n\\end{eqnarray}\n\\end{subequations}\nwhere $r_*$ denotes the body's radius, not to be confused with\nits Schwarzschild radius $r_S$. The background scalar field\nequation (\\ref{Eq2}) can then be solved as in Sec.~\\ref{Sec3},\nwith the difference that only the mass interior to the sphere of\nradius $r$, $m(r) = (r_S\/2G) (r\/r_*)^3$, is a source for\n$\\varphi'(r)$. We get\n\\begin{equation}\n\\varphi'_\\text{interior} = -\\frac{k_2 M^2 r}{4 k_3}\n\\left(1\\pm\\sqrt{1-\\frac{8 H^2 k_3^2}{k_2^2 M^4}\\,\n\\dot\\varphi_c^2\n+\\frac{4 k_3 r_S}{k_2^2\nM^2 r_*^3}\\, \\alpha_\\text{eff}}\\right)\n+ \\mathcal{O}\\left(r_S^2\\right)\n+ \\mathcal{O}\\left(H^2 r^2\\right),\n\\label{Eq35}\n\\end{equation}\nwhere either $H = 0$ in asymptotic Minkowski spacetime, or $H$\nand $\\dot\\varphi_c$ are given by Eqs.~(\\ref{Eq23}) and\n(\\ref{Eq24}) in asymptotic de Sitter spacetime (yielding the\nconstant $-\\frac{8}{9}$ we already encountered in\nEq.~(\\ref{Eq31}) for the exterior solution). The only difference\nwith respect to the exterior solution (\\ref{Eq9}) or\n(\\ref{Eq31}), at this lowest post-Newtonian order, is that\n$\\alpha_\\text{eff}$ is multiplied by the constant $1\/r_*^3$\ninstead of $1\/r^3$. 
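The first-order expansions quoted in Eq.~(\\ref{Eq34}) can be checked numerically; the sketch below (plain Python, arbitrary units, our variable names) confirms that the exact interior metric functions match their linearized forms up to $\\mathcal{O}(r_S^2)$:

```python
import math

# Numerical check of the first-order (in r_S) expansions in Eq. (34),
# in arbitrary units, with r_star the body's radius (names are ours).
r_star = 1.0
r = 0.6 * r_star   # a representative interior point

def e_nu(rS):
    return 0.25 * (3 * math.sqrt(1 - rS / r_star)
                   - math.sqrt(1 - rS * r**2 / r_star**3))**2

def e_lambda(rS):
    return 1 / (1 - rS * r**2 / r_star**3)

for rS in [1e-4, 1e-5]:
    nu_lin = 1 - (3 - r**2 / r_star**2) * rS / (2 * r_star)
    lam_lin = 1 + rS * r**2 / r_star**3
    # residual expansion errors are O(rS^2) with O(1) coefficients
    assert abs(e_nu(rS) - nu_lin) < 10 * rS**2
    assert abs(e_lambda(rS) - lam_lin) < 10 * rS**2
```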
In the Vainshtein regime $r\\leq r_* \\ll r_V$,\nwe thus have\n\\begin{equation}\n\\varphi'_\\text{interior} = \\mp\\frac{k_2 M^2}{4 k_3}\n\\left(\\frac{r_V}{r_*}\\right)^{3\/2} r\n\\left[1+\\mathcal{O}\\left(\\frac{r_S}{r_*}\\right)\n+\\mathcal{O}\\left(\\frac{r_*^3}{r_V^3}\\right)\n+\\mathcal{O}\\left(H^2r_*^2\\right)\\right],\n\\label{Eq36}\n\\end{equation}\ninstead of Eq.~(\\ref{Eq12}), where the lower sign ($+$)\ncorresponds to asymptotic Minkowski spacetime, and the upper one\n($-$) to asymptotic de Sitter spacetime. The various\ncontributions to the effective metric (\\ref{Eq16}) can now be\ncomputed within the body, and one finds at lowest order (in both\nasymptotic Minkowski and de Sitter cases)\n\\begin{equation}\n\\mathcal{G}^{\\mu\\nu}_\\text{interior} \\approx\n|k_2| \\left(\\frac{r_V}{r_*}\\right)^{3\/2}\n\\text{diag}\\left(-\\frac{3}{2}, 1, \\frac{1}{r^2},\n\\frac{1}{r^2 \\sin^2\\theta}\\right),\n\\label{Eq37}\n\\end{equation}\ninstead of (\\ref{Eq20}) [or the opposite of (\\ref{Eq20}) in\nthe asymptotic de Sitter case, for which $-k_2 = |k_2|$]. Note\nthat this effective metric is \\textit{discontinuous} at the\nsurface of the body, because it involves $\\varphi''$, which is\ndiscontinuous when one suddenly passes from empty space to a\nfinite matter density. But of course, everything becomes\ncontinuous when one considers a smooth matter profile. Note also\nthat the sound speed, $\\sqrt{2\/3}\\, c$, is now isotropic, and\ndiffers from its value outside matter (which we found in\nSec.~\\ref{Sec5} to be $2c\/\\sqrt{3}$ in the radial direction and\n$c\/\\sqrt{3}$ in the orthoradial ones). This can be interpreted as\ndifferent refractive indices of matter and vacuum for scalar\nwaves. 
But the crucial information brought by Eq.~(\\ref{Eq37}) is\nthat the effective metric remains of hyperbolic signature $-+++$\nwithin matter, therefore no ghost instability develops even in\nthe interior of a body.\n\nOn the other hand, the Ricci tensor entering Eq.~(\\ref{Eq4}) no\nlonger vanishes within matter, and this causes extra couplings of\nthe perturbations to the background fields, whose effects could\n\\textit{a priori} lead to other types of instabilities. This\nRicci tensor is actually generated both by matter and by the\nbackground scalar field itself. The scalar-scalar\nself-interactions are responsible for the $\\mathcal{O}(k_3^2)$\nterms in the effective metric (\\ref{Eq16}), as was first shown in\n\\cite{Deffayet:2010qz}. Not only did we already take them into\naccount to derive (\\ref{Eq37}), but we also saw in\nSecs.~\\ref{Sec5} and \\ref{Sec6} that they are negligible with\nrespect to the $\\mathcal{O}(k_3)$ terms, provided\n$\\alpha^2\/|k_2|$ is at most of order 1 (i.e., that we are not\nconsidering an extremely large matter-scalar coupling constant).\nWe can thus focus on the Ricci tensor generated by matter,\n$R^{\\mu\\nu}_\\text{matter} = \\frac{3}{2}\\left(r_S\/r_*^3\\right)\ng^{\\mu\\nu} + \\mathcal{O}\\left(r_S^2\\right)$, which multiplies\n\\textit{first} derivatives of the scalar field in\nEq.~(\\ref{Eq4}), and thereby does not change its kinetic term\n(second derivatives). Nevertheless, its presence means that\nwithin matter, scalar perturbations acquire a direct derivative\ncoupling to the background scalar field. 
The first-order\nexpansion of (\\ref{Eq4}) reads\n\\begin{equation}\n\\Box\\varphi\\, \\Box\\pi\n- \\left(\\nabla^\\mu\\partial^\\nu\\varphi\\right)\n\\left(\\nabla_\\mu\\partial_\\nu\\pi\\right)\n-R^{\\mu\\nu}_\\text{matter}\\, \\varphi_{,\\mu}\\pi_{,\\nu} = 0,\n\\label{Eq38}\n\\end{equation}\nwhere we have neglected the $\\mathcal{O}(k_3^2)$ terms coming\n{}from the diagonalization (\\ref{Eq14})--(\\ref{Eq16}) of the\nkinetic terms, as well as the $\\left(k_2M^2\/2k_3\\right) \\Box\\pi$\nlinear term because we assume the star is in the Vainshtein\nregime ($r_* \\ll r_V$). At lowest post-Newtonian order, the\nradial contribution $-R^{rr}_\\text{matter}\\, \\varphi' \\pi'$ can\nalso be neglected, because both $R^{rr}_\\text{matter} =\n\\mathcal{O}\\left(r_S\\right)$ and $\\varphi' =\n\\mathcal{O}\\left(\\sqrt{r_S}\\right)$ tend to $0$ with $r_S$. But\nthe temporal contribution $-R^{00}_\\text{matter}\\,\n\\dot\\varphi_c\\, \\dot\\pi$ is crucial, as it corresponds to a\ndamping or antidamping term depending on its sign. Let us thus\nconsider the asymptotic de Sitter case of Sec.~\\ref{Sec6}, where\n$\\dot\\varphi_c$ is given by Eq.~(\\ref{Eq24}). The main\ncontributions to Eq.~(\\ref{Eq38}) then read\n\\begin{eqnarray}\n\\nabla_\\mu\\left(\\mathcal{G}^{\\mu\\nu}_\\text{interior}\n\\partial_\\nu\\pi\\right)\n-\\frac{2 k_3}{M^2}\\,R^{00}_\\text{matter}\\,\n\\dot\\varphi_c\\, \\dot\\pi &=& 0\n\\nonumber\\\\\n\\Leftrightarrow \\quad\n-\\ddot\\pi+\\frac{2}{3}\\,\\Delta\\pi\n-\\left(\\frac{|k_2|}{3\\, \\alpha_\\text{eff}^2}\\right)^{1\/4}\n\\left(\\frac{r_S}{r_*^3}\\right)^{1\/2}\n\\dot\\pi &=& 0.\n\\label{Eq39}\n\\end{eqnarray}\nThe coefficient of the $\\dot\\pi$ term is generically of order\n$\\sqrt{r_S\/r_*^3}$, notably when the bare matter-scalar coupling\nconstant $\\alpha$ vanishes or is negligibly small. Indeed,\nEqs.~(\\ref{Eq10}) and (\\ref{Eq24}) then give $\\alpha_\\text{eff}^2\n\\approx |k_2|\/3$. 
Note that this coefficient can never be small\nwith respect to $\\sqrt{r_S\/r_*^3}$, because we assume that\n$\\alpha^2\/|k_2|$ is at most of order 1, therefore\n$\\alpha_\\text{eff}^2\/|k_2|$ is never large either. On the other\nhand, it is possible to choose the bare $\\alpha$ so that\n$\\alpha_\\text{eff}$ is small (see Sec.~\\ref{Sec6}), and the\n$\\dot\\pi$ term of Eq.~(\\ref{Eq39}) may thus be multiplied by a\nlarge number in some specific situations. Therefore, the extra\ncoupling of the perturbation $\\pi$ to the cosmologically-imposed\n$\\dot\\varphi_c$ \\textit{within} matter can drastically change its\nbehavior. The good news is that its sign implies this is a\nfriction term, and therefore that no scalar instability occurs\nwithin the material body. The plane-wave solutions of\nEq.~(\\ref{Eq39}) indeed read (in Cartesian coordinates)\n\\begin{equation}\n\\pi \\propto e^{-t\/T}\\sin\\left(\\omega t\n- \\mathbf{k}\\cdot\\mathbf{x}+\\text{const}\\right),\n\\label{Eq40}\n\\end{equation}\nwith a dispersion relation\n\\begin{equation}\n\\frac{2}{3}\\, \\mathbf{k}^2 = \\omega^2 + \\frac{1}{T^2},\n\\label{Eq41}\n\\end{equation}\nand a decay time\n\\begin{equation}\nT \\equiv\n2\\left(\\frac{3\\, \\alpha_\\text{eff}^2}{|k_2|}\\right)^{1\/4}\n\\left(\\frac{r_*^3}{r_S}\\right)^{1\/2}.\n\\label{Eq42}\n\\end{equation}\nFor the Sun, $2\\sqrt{r_*^3\/r_S}$ is about one hour, therefore\nthis damping is always quite efficient. But when\n$\\alpha_\\text{eff}$ happens to be small, because of a balance\nbetween the bare matter-scalar coupling constant $\\alpha$ and the\ncosmologically-induced one $k_3\\dot\\varphi_c^2\/M^2$ in\nEq.~(\\ref{Eq10}), scalar perturbations are suppressed even\nfaster. 
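One can quickly confirm that the damped plane wave (\\ref{Eq40}) solves Eq.~(\\ref{Eq39}) exactly when the dispersion relation (\\ref{Eq41}) holds. Writing the friction coefficient of Eq.~(\\ref{Eq39}) as $2\/T$ (cf.\\ Eq.~(\\ref{Eq42})) and acting on the complex exponential $e^{st-ikx}$ with $s=i\\omega-1\/T$, the operator simply multiplies it by an overall factor, which must vanish; a short numerical check (plain Python, one spatial dimension, illustrative values of $T$ and $k$ of our own choosing):

```python
import math

# Check that the damped plane wave of Eq. (40) solves Eq. (39) in one
# spatial dimension when the dispersion relation (41) holds.  Acting on
# exp(s t - i k x) with s = i*omega - 1/T, the operator
#   -d^2/dt^2 + (2/3) d^2/dx^2 - (2/T) d/dt
# multiplies it by  -s^2 - (2/3) k^2 - (2/T) s,  which must vanish.
T, k = 3.0, 1.2                               # illustrative values
omega = math.sqrt(2 * k**2 / 3 - 1 / T**2)    # dispersion relation, Eq. (41)
s = 1j * omega - 1 / T
factor = -s**2 - 2 * k**2 / 3 - (2 / T) * s
assert abs(factor) < 1e-12
```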
For $\\omega = 0$ and $\\frac{2}{3}\\,\\mathbf{k}^2 \\leq\n1\/T^2$, there exist other solutions of the form $\\pi \\propto\ne^{-t\/T_\\pm}\\sin\\left(\\mathbf{k}\\cdot\\mathbf{x}\n+\\text{const}\\right)$, where $1\/T_\\pm \\equiv 1\/T \\pm \\sqrt{1\/T^2\n- 2\\mathbf{k}^2\/3}$ is always positive, therefore such\nperturbations are damped too. The upper sign gives an even faster\ndecay than for the plane waves (\\ref{Eq40}), whereas the lower\nsign corresponds to a slower damping. The limiting case of a\nhomogeneous perturbation ($\\mathbf{k} = \\mathbf{0}$) gives either\na fast damping $\\pi \\propto e^{-2t\/T}$, or an actual\nconstant which can be reabsorbed in the definition of the\nbackground $\\varphi_0$.\n\nIn conclusion, the physics of scalar perturbations is fully safe\nin the interior of a spherical body in the Vainshtein regime. Not\nonly is the Cauchy problem well posed and the kinetic energy\ncarried by scalar perturbations positive, but even the subtle\nderivative coupling caused by the nonvanishing Ricci tensor in\nEq.~(\\ref{Eq38}) efficiently \\textit{damps} these perturbations.\nNote that this local friction within material bodies is\nparadoxically caused by the slow cosmological time evolution\n($\\dot\\varphi_c \\neq 0$) of the background scalar field. The\nlarge-distance behavior of the Universe is thus significantly\ninfluencing local physics.\n\n\\section{Conclusions}\n\\label{Concl}\n\nIn this paper we studied in detail spherically symmetric\nsolutions for the cubic covariant Galileon model (\\ref{Eq1}) in\nthe presence of a matter source. An important ingredient of our study\nis that we take into account the time variation of the scalar\nfield, induced by the cosmological evolution. 
Although\nthe cosmological time evolution is tiny (normally of the order of the\nHubble rate, $\\dot\\varphi_c \\sim H$), its effect on\nthe behavior of the scalar is crucial and cannot be disregarded.\n\nWe exhibited, in particular, an {\\it induced} matter-scalar\ncoupling for $\\dot\\varphi_c \\neq0$, Eq.~(\\ref{Eq10}). This effect\narises because of the kinetic mixing of the Galileon and the\ngraviton, which results in the coupling of the derivatives of the\nscalar field to the curvature, in the Galileon field equation.\nFor $\\dot\\varphi_c \\neq0$, this term plays the role of a\nmatter-scalar coupling, hence it is {\\it induced} by the\ncosmological evolution of $\\varphi$. It is important to note that\nthis coupling is naturally of order unity, therefore the\nGalileon effectively couples to matter sources even if the bare\nscalar-matter coupling is absent, $\\alpha =0$.\n\nWe also found that the local solution for the Galileon field\ncrucially depends on the asymptotic conditions at large\ndistances. Indeed, to find time-dependent solutions in the presence\nof massive sources, we applied the ansatz (\\ref{Eq6}), which\nenables us to separate the time and radial variables. The\nradial-dependent part can then be found exactly, in the test\nscalar field approximation, for an arbitrary static background\nmetric, Eq.~(\\ref{Eq8}). [We checked \\textit{a posteriori} the\nconditions for backreaction to be negligible.] The full solution\nfor $\\varphi'$, however, consists of two different branches. The\nsolution with the {\\it lower} sign in (\\ref{Eq8}) corresponds to\nasymptotically Minkowski spacetime, with no time evolution of\n$\\varphi$. The other case we studied in detail is the\nasymptotically de Sitter Universe, whose self-acceleration is\nprovided by a time evolution of the Galileon itself. In this\ncase, the cosmological asymptotic behavior dictates the choice of the\nother branch of the solution, namely, Eq.~(\\ref{Eq8}) with the\n{\\it upper} sign. 
It turns out that the stability of the\nsolutions drastically depends on the choice of the branch.\n\nIn order to investigate the stability of these solutions, notably\nthe well-posedness of the Cauchy problem and the positivity of\nenergy, we identified the actual spin-0 and spin-2 degrees of\nfreedom by diagonalizing their kinetic terms in Sec.~\\ref{Sec4}.\nUsing the results of Sec.~\\ref{Sec4}, we found in Sec.~\\ref{Sec5}\nthat the background scalar field given by solution (\\ref{Eq8}),\n(\\ref{Eq9}) or (\\ref{Eq12}) with their \\textit{lower} sign\n(corresponding to the asymptotically Minkowski spacetime) is\nconsistent both in the Vainshtein regime ($r\\ll r_V$) and at\nlarge distances, provided $k_2 >0$ and $k_3\\alpha \\geq 0$.\n\nThe asymptotically de Sitter case has been treated separately in\nSec.~\\ref{Sec6}. It should be noted that the self-accelerating de\nSitter stage is only possible for the choice $k_2 <0$ in action\n(\\ref{Eq1}). Therefore, one would naively expect the appearance of a\nghost instability at least in the Vainshtein regime, see e.g.\nEq.~(\\ref{Eq20}), where a negative $k_2$ means a ghost scalar\nperturbation. One should also bear in mind that the existence of\nthe Vainshtein regime is generic, even with no direct\nscalar-matter coupling, because of the {\\it induced} coupling for\ntime-dependent solutions. It turns out, however, that the choice\nof the correct branch, corresponding to the asymptotically de\nSitter solution, namely, Eq.~(\\ref{Eq8}) with the {\\it upper}\nsign (in contrast to the asymptotically Minkowski case, where the {\\it\nlower} sign is taken), drastically changes the behavior of\nperturbations. Indeed, we found that the solution is stable in\nthe Vainshtein regime, contrary to naive expectations.\n\nThe large-distance behavior, $r\\gg r_V$, of the asymptotically de\nSitter case is more subtle. 
The pure cosmological perturbations of the\nspin-0 degree of freedom have positive-energy dust-like behavior\n(i.e., with vanishing sound speed). Therefore any small deviation\ncan potentially create a change in signature for perturbations.\nIn fact, the deviations caused by a local material body do not\nyield a correct signature: The field equations for perturbations\nbecome elliptic in the angular directions. However, this problem\nexists only in the pure de Sitter universe filled with the\nGalileon field with no external matter. In a more realistic\nscenario, when a small amount of normal matter is added (such\nthat the universe is almost de Sitter), the effective metric\n(\\ref{Eq16}) always has the correct signature.\n\nAnother interesting effect, associated with the cosmological time\nevolution of the Galileon, is the appearance of a significant\nfriction term in the equation of motion. Because of the\ncommutation of covariant derivatives, the Galileon field\nequations involve couplings of the scalar field to the curvature\ntensor. In the cubic model (\\ref{Eq1}) studied in this paper,\nthis generates \\textit{within} matter an extra interaction of\nscalar perturbations with their background. We showed that it\ncauses an efficient damping of these perturbations, a local\neffect paradoxically caused by the cosmological time evolution.\n\nTo summarize, we studied spherically symmetric solutions of the\ncubic covariant Galileon model in presence of a matter source and\na cosmological evolution. We found that the time evolution of the\nGalileon, even as small as a cosmological one, leads to\nconsiderable effects: in particular, the appearance of an {\\it\ninduced} coupling, which is generically of the order of unity, so\nthat the Galileon becomes effectively coupled to local matter\nsources even if the bare coupling is zero; and the emergence of a\nfriction term, which effectively {\\it damps} perturbations within\nmatter sources. 
The detailed analysis of perturbations showed\nthat the Galileon model is well behaved in asymptotically\nMinkowski space as well as in the asymptotically de Sitter,\nprovided that the correct branch of the solution is chosen.\n\n\\section*{Acknowledgments}\nThe work of E.B. was supported in part by grant FQXi-MGA-1209\n{}from the Foundational Questions Institute. The work of G.E.-F.\nwas in part supported by the ANR grant THALES.\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\\label{introduction}\n\nWe present current methods for identifying causal effects under\n``interference'', a term which covers a very broad class of situations\nin which a unit's outcomes depend not only on treatments received by\nthat unit, but also on treatments received by other units \\citep{cox58}.\nThis includes effects that spill over from one unit to others. For\nexample, in an agricultural experiment, fertilizer applied to one plot\nof land may literally spill over into other plots assigned to different\ntreatments, therefore affecting their yields. Interference may arise\nfrom interactions between units, such as through social influence\nprocesses. For example, in the context of an election, exposing a voter\nto a persuasive appeal may affect what that voter says to their friends,\nwhich in turn may affect the friends' outcomes.\n\nInterference represents a departure from the traditional assumption\nwherein the potential outcomes that would be observed for a unit in\neither the treatment or control condition, depend only on that unit's,\nand not the overall, treatment assignment. This traditional assumption\nis implied by what \\citet{rubin1990} refers to as the ``stable unit\ntreatment value assumption'' (SUTVA).\n\nFigure \\ref{fig:interference_dag} displays channels through which\ninterference might occur. 
The black elements in Figure\n\\ref{fig:interference_dag} show a directed acyclic graph that captures\npotential spillover effects onto unit 2 from a treatment assigned to\nunit 1. We assume an experiment where unit 1's treatment, \\(Z_1\\), is\nrandomly assigned. Then, the effect of this treatment, captured by the\nblack arrows flowing from \\(Z_1\\), could be to alter unit 1's own\noutcome, \\(Y_1\\), as well as unit 2's outcome, \\(Y_2\\). This could\nhappen via a pathway in which \\(Y_1\\) mediates the effect of \\(Z_1\\) on\n\\(Y_2\\). Such outcome-mediated effects are known as \\emph{contagion}\neffects, which we briefly discuss toward the end of this chapter. Or it\ncould be that \\(Z_1\\) affects \\(Y_2\\) through channels that do not go\nthrough \\(Y_1\\). Spillover includes the sum total of the effects from\n\\(Z_1\\) to \\(Y_2\\). The gray elements in Figure\n\\ref{fig:interference_dag} show that spillovers could be running from\n\\(Z_2\\) to \\(Y_1\\) as well. Finally, the variable \\(U\\) captures other\nvariables that might induce dependency between \\(Y_1\\) and \\(Y_2\\). It\nis important to recognize that these sources of outcome dependence or\nclustering are wholly distinct from spillover. However, such confounders\nundermine the ability to isolate contagion effects from other spillover\nmechanisms, a point to which we return below.\n\n\\begin{figure}\n\\centering\n\\includegraphics[width=0.60000\\textwidth]{.\/interferenceDAG-2.pdf}\n\\caption{Causal graph illustrating interference mechanisms and\nconfounding mechanisms when treatment (\\(Z\\)) is randomized.\n\\label{fig:interference_dag}}\n\\end{figure}\n\nIn this chapter, we assume that the researcher is interested in\nestimating spillover effects. We focus on randomized experiments for\nwhich we have some understanding of the structure through which\nspillover effects occur. 
In the first section, we review cases where the\ninterference network is known completely, but then can take almost\narbitrary form. In the second section, we review cases where we know\nonly that interference is fully contained within the boundaries of\nstrata that partition the population, but then the interference network\nwithin these strata is unknown. The final section makes some points\nregarding the attempt to distinguish contagion effects from other forms\nof spillover. We do not discuss work that examines causal effects in\nsituations where the interference network is fully hidden, as in\n\\citet{savje2017average}. Moreover, our emphasis is on large-N\nestimation of spillover effects, and so we omit discussion of methods\nthat work off the experimental randomization to develop exact tests for\ninterference effects\n\\citep{rosenbaum07_interference, aronow2012general, bowers2013reasoning, athey2018exact}.\nThese methods either restrict themselves to testing for interference, or\nrequire strong assumptions (for example, a finite-dimensional model of\ncausal effects) to attain interval estimates.\n\n\\section{Three Motivating Examples}\\label{three-motivating-examples}\n\nWe begin with three examples that allow us to illustrate key points. The\nfirst example is a study by \\citet{paluck2016changing}, who study the\neffects of an antibullying program in New Jersey schools. The authors\nbegan by measuring the schools' social networks. They did this by asking\nstudents to report which other students they chose to spend time with in\nprevious weeks. The experiment then randomly assigned schools to the\nantibullying program, and within schools, randomly selected students\nfrom an eligible subpopulation to actively participate in the program.\nAssuming the network measures are accurate, the experiment identifies\nspillover effects onto students who themselves do not participate in the\nprogram but have peers who do. 
The ability to get at such spillover\neffects depends on the accuracy of the network measure and the ways that\none specifies potential exposure to spillovers on the basis of this\nnetwork. If the researcher assumes that only peers of program recipients\ncan be affected by the anticonflict intervention, but in fact peers of\npeers can be affected as well, then inferences about the program's\ndirect and spillover effects may be biased. The section below on\narbitrary interference networks discusses this issue along with\nassociated sensitivity analyses.\n\nAs a second example, consider Figure \ref{fig:negative_spillover}, which\nshows the results of an experimental study in Kenya by\n\citet{haushofer2018long} on the long-term (after 3 years) effects of\nunconditional cash transfers. The outcome here is monthly household\nconsumption. In this study, villages were randomly selected to be\ntreated, and then within these villages, half of the households were\nrandomly selected to receive about 400 USD in cash. As Figure\n\ref{fig:negative_spillover} shows, average monthly consumption in\ntreatment villages, pooling recipient and non-recipient households, is\n\$211 (green horizontal line), which is similar to the average in\ncontrol villages (\$217). But the inter-village comparison masks\nvariation within treated villages. Recipient households consume \$235\nper month on average, while non-recipient neighbors in treatment\nvillages consume \$188 per month on average.
Given that the treatment\nwas assigned in a manner that randomized both across villages and within\nvillages, these three types of households (recipients, neighbors, and\nhouseholds in control villages) are \emph{ex ante} exchangeable, in which case the\nexperiment yields an unbiased estimate of a negative within-village\nspillover effect.\n\n\begin{figure}\n\centering\n\includegraphics[width=0.50000\textwidth]{.\/sandefur_cash_transfers_fig_1.png}\n\caption{Figure from \citet{sandefur2018bloomberg} displaying long-term\nspillover effects from an unconditional cash transfer program, as\nreported in \citet{haushofer2018long}. \label{fig:negative_spillover}}\n\end{figure}\n\nThis example illustrates a few additional nuances. First, when\nspillovers are present, one needs to think about effects in terms of\noverall assignment patterns. Neither the comparison between directly\ntreated households and control-village households nor that between\ndirectly treated households and untreated households in treated\nvillages gets at ``the'' effect of cash transfers. Effects depend on\nexactly which potentially interacting households are treated, and at\nwhat rate. This particular\nexperiment gives evidence on outcomes for treated households and\nuntreated neighboring households when the within-village treatment rate\nis 50\%. If the quantity of policy interest were how outcomes change\ngoing from 0\% to 100\% treatment rates, for example, it is not clear\nthat the experiment could directly speak to that. Second, households\npresumably differ not only in how they would respond after receiving the\ntransfer per se, but also in how they would respond given the precise\nset of other households that receive the transfer. Suppose the village\nincluded two business partners, and business production exhibited\nincreasing returns to input capital. Then, outcomes would presumably\ndiffer if the households of both partners were treated as compared to\nonly one partner being treated.
Outcomes among the untreated neighbors\nin treatment villages depend not only on the treatment saturation rate\n(50\%), but also on the precise configuration of treated and untreated\nhouseholds. This raises the question of how to interpret effects such as\nthose presented in Figure \ref{fig:negative_spillover}---what\ncounterfactual comparisons are being characterized, exactly? The section\nbelow on partial interference addresses these issues.\n\nA third example is the experiment by \citet{nickerson2008voting} on\npotential contagion in voter turnout. Households with two registered\nvoters were first randomly assigned to one of three conditions: a\nget-out-the-vote (GOTV) doorstep appeal, a doorstep appeal to promote\nrecycling, or a control condition in which nothing was done to the\nhousehold. The recycling appeal was meant to serve as a ``placebo''\ntreatment to account for the fact that only some households, and\nparticularly individuals with specific characteristics, would open the\ndoor to receive an appeal. The experiment thus yielded data on subjects\nwho opened the door and were thus direct recipients of either the GOTV\nor recycling appeals, their housemates who were not there at the door,\nand then the full set of control households. Comparing voter turnout\namong housemates of those who directly received the GOTV appeal to\nhousemates of those who received the recycling appeal, one can estimate\nwhether the GOTV treatment spilled over from the direct recipient to the\nhousemate. Insofar as there is an effect, one may wonder if the\nmechanism at work is contagion---that is, it is the voting intention of\nthe direct recipient that went on to affect the voting intention of the\nhousemate---or some other mechanism. This distinction between mechanisms\nwould have implications for theories about norms that support voting\nbehavior.
In the section below on contagion, we return to this\nexperiment and review assumptions needed to isolate contagion effects.\n\n\section{Formal Setting}\label{formal-setting}\n\nWe now present a formal framework for defining causal effects under\ninterference. Suppose an experimenter intervenes on a finite population\n\(U\) of units indexed by \(i = 1, \ldots, N\). Let us suppose further\nthat the intervention is defined by a treatment assignment vector\n\(\mathbf{z} = (z_1, \ldots, z_N)'\), where \(z_i \in \{0,1\}\) denotes the\ntreatment value that unit \(i\) receives. With a binary treatment there\nare \(2^N\) possible assignment vectors \(\mathbf{z}\). An \emph{experimental design}\nis a plan for randomly selecting a particular value of \(\mathbf{z}\) from these\n\(2^N\) different possibilities with predetermined probability\n\(p_\mathbf{z}\)---for example, Bernoulli assignment (i.e., coin flips) or\na completely randomized design. We collect the assignments that the\ndesign can produce in the support set\n\(\Omega = \{\mathbf{z} : p_\mathbf{z} >0 \}\); the realized treatment assignment\n\(\mathbf{Z} = (Z_1, \ldots, Z_N)'\) is then a random vector with support \(\Omega\)\nand \(\Pr(\mathbf{Z} = \mathbf{z}) = p_\mathbf{z}\). For example, with a population of size\n\(N=10\), and an experimental design that randomly assigns without\nreplacement a proportion \(p=0.2\) to treatment condition \(z_i =1\)\nwith uniform probability, there are \({N\choose pN}=45\) possible\ntreatment assignments (\(\left\vert{\Omega}\right\vert = 45\)) and the\nrealized treatment assignment \(\mathbf{Z}\) has \(p_\mathbf{z} = \frac{1}{45}\). The\nexperimental design characterizes precisely the probability distribution\nof the assigned treatments.
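As a quick check on this counting, the completely randomized design just described can be enumerated directly. The following Python sketch is purely illustrative (it is separate from the chapter's R code):

```python
from itertools import combinations

# Completely randomized design: exactly p*N of the N units are treated.
N, p = 10, 0.2
m = round(p * N)  # number of treated units: 2

# Omega: all assignment vectors z with exactly m ones, each selected
# with uniform probability p_z = 1 / |Omega|.
omega = [tuple(1 if i in treated else 0 for i in range(N))
         for treated in combinations(range(N), m)]

print(len(omega))      # |Omega| = C(10, 2) = 45
print(1 / len(omega))  # p_z for every z in Omega
```

Every vector in `omega` has exactly two treated units, and the design places probability \(1/45\) on each, matching the calculation in the text.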
In experiments, this is determined by the\nresearcher and is therefore known.\n\nTo analyze the effect of different treatment assignments, we compare the\ndifferent outcomes they produce. These potential outcomes are defined\nfor each unit \(i\) as the elements in the image of a function that maps\nassignment vectors to real-valued outcomes,\n\(y_i : \Omega \rightarrow \mathbb{R}\). In particular, \(y_i(\mathbf{z})\) is the\nresponse of unit \(i\) to assignment \(\mathbf{z}\). For convenience, let\n\(\mathbf{z}_{-i}=(z_1, \ldots, z_{i-1},z_{i+1}, \ldots, z_N)'\) denote the\n\((N-1)\)-element vector that removes the \(i\)th element from \(\mathbf{z}\).\nThen, the potential outcome \(y_i(\mathbf{z})\) can equivalently be expressed as\n\(y_i(z_i; \mathbf{z}_{-i})\). Continuing with the example of the cash transfer\nprogram above, this quantity would be the potential consumption of\nhousehold \(i\) given its assignment as a transfer recipient or\nnon-recipient (\(z_i\)) and the treatment assignment of all other\nhouseholds (\(\mathbf{z}_{-i}\)), including those inside and outside household\n\(i\)'s village.\n\nTraditional analyses of experiments, and other chapters in this volume,\nassume no interference, in which case the potential outcome \(y_i(\mathbf{z})\)\nis restricted to be affected only by \(i\)'s own treatment. That is,\nwith no interference, for any two treatment assignments \(\mathbf{z}\) and\n\(\mathbf{z}'\) for which \(z_i\) remains unchanged, we have\n\(y_i(z_i; \mathbf{z}_{-i}) = y_i(z_i; \mathbf{z}_{-i}')\) for all\n\(i \in \{1, \ldots, N\}\).
When interference is present, there exist\nsome units \(i \in U\) for which\n\(y_i(z_i; \mathbf{z}_{-i}) \neq y_i(z_i; \mathbf{z}_{-i}')\); that is, fixing the\ntreatment of \(i\) while changing other units' treatments results in\nchanges in \(i\)'s outcome.\n\nLet \(Y_i\) denote the observed outcome of unit \(i\), where the\nobserved outcome is related to the potential outcomes as\n\(Y_i = y_i(\mathbf{Z}) = y_i(Z_i;\mathbf{Z}_{-i})\), where \(\mathbf{Z}_{-i}\) denotes the\nvector \(\mathbf{Z}\) net of its \(i\)th element. In the case of no\ninterference, \(Y_i = y_i(Z_i)\). Therefore, when interference is\npresent, we need to account for others' treatment assignments as well.\n\n\section{Arbitrary But Known Interference\nNetworks}\label{arbitrary-but-known-interference-networks}\n\nThis section reviews estimation methods in a setting where interference\noccurs over a network of arbitrary structure, but this structure is\nknown. The analysis follows \citet{aronow2017estimating}. We represent a\nunit's set of interfering units in terms of network ties. Then,\ndepending on the network structure and the treated units' network\ncharacteristics, different treatment assignments may result in different\nand arbitrary, but known, patterns of interference. For example,\nassuming that interference happens through direct ties between units,\ntreating any one unit in a fully-connected network generates a pattern\nin which the treatment of that one unit interferes with the outcome of\nevery other unit in the network.
In a regular lattice, the treatment of\nany one treated unit interferes only with the outcomes of that unit's\nfour nearest neighbors, and in an irregular network, treatment\nassignments that treat units with many direct ties generate more\ninterference than assignments that treat units with just a few ties.\n\nAs in the anticonflict social network experiment of\n\citet{paluck2016changing}, these methods require the researcher to\nmeasure the network or to have comprehensive information about\nconnections between experimental units, and to define precise causal\neffects which reflect the possible types of treatment \emph{exposures}\nthat might be induced in the experiment, which in turn requires making\nspecific assumptions about the extent of interference. The goal is to\nestimate exposure-specific causal effects---for the anticonflict\nprogram, for example, we might estimate effects on students for whom at\nleast one peer is a direct program participant, or for whom exactly two\npeers are participants, etc. Knowing the treatment assignment\ndistribution allows one to account for potential sources of confounding\nthat arise from heterogeneity across units in their likelihood of\nfalling into different exposure conditions (for example, heterogeneity\nin terms of students' number of connections with other students). The\nsections below explain.\n\n\subsection{Exposure Mapping}\label{exposure-mapping}\n\nTo determine each unit's treatment exposure under a given treatment\nassignment, \citet{aronow2017estimating} define an \emph{exposure\nmapping} that maps the set of assignment vectors and unit-specific\ntraits to an exposure value:\n\(f : \Omega \times \Theta \rightarrow \Delta\), where\n\(\theta_i \in \Theta\) quantifies relevant traits of unit \(i\) such as\nthe number of direct ties to other units in the network and, possibly,\nweights assigned to each of these ties.
The set \\(\\Delta\\) contains all\nof the possible treatment-induced exposures that may be generated in the\nexperiment, and its cardinality depends on the nature of interference.\nFor example, with no interference and a binary treatment the exposure\nmapping ignores unit specific traits \\(f(\\mathbf{z}, \\theta_i) = z_i\\),\nproducing two possible exposure values for each unit: no exposure (or\ncontrol condition, \\(z_i=0\\)) and direct exposure (or treatment\ncondition, \\(z_i=1\\)), in which case \\(\\Delta = \\{0, 1\\}\\). Now,\nconsider interference that occurs through direct peer connections. Then,\n\\(\\theta_i\\) is a column vector equal to the transpose of unit \\(i\\)'s\nrow in a network adjacency matrix (which captures \\(i\\)'s direct\nconnections to other units), and the exposure mapping\n\\(f(\\mathbf{z}, \\theta_i)\\) can be simply defined to capture \\emph{direct}\nexposure to treatment---or the effect of being assigned to\ntreatment---and \\emph{indirect} exposure---or the effect of being\nexposed to treatment of peers.\\protect\\rmarkdownfootnote{Note that the meaning of\n ``direct'' and ``indirect'' in the interference setting is different\n than in the mediation setting reviewed in Glynn's chapter in this\n volume.} An example of such an exposure mapping (and by no means the\nonly possibility) is the following, whereby indirect exposure occurs\nwhen at least one peer is treated:\n\n\\[f(\\mathbf{z}, \\theta_i) = \\begin{cases} d_{11} (\\text{Direct + Indirect Exposure}): & z_i\\mathbf{I}(\\mathbf{z}'\\theta_i>0) =1, \\\\\nd_{10} (\\text{Isolated Direct Exposure}): & z_i\\mathbf{I}(\\mathbf{z}'\\theta_i=0) =1, \\\\\nd_{01} (\\text{Indirect Exposure}): & (1-z_i)\\mathbf{I}(\\mathbf{z}'\\theta_i>0) =1,\\\\\nd_{00} (\\text{No Exposure}): & (1-z_i)\\mathbf{I}(\\mathbf{z}'\\theta_i=0) =1 \\end{cases}\\]\n\n\\noindent For this particular case\n\\(\\Delta = \\{d_{11}, d_{10}, d_{01}, d_{00}\\}\\). 
This characterization\nof exposures is ``reduced form'' in that it does not distinguish between\nthe mechanisms through which spillover effects occur.\n\nSpecification of the exposure mapping requires substantive consideration\nof the data generating process. \citet{manski2013identification}\ndiscusses subtleties that arise in specifying exposure mappings. For\nexample, the author shows how models of simultaneous endogenous choice\n(due to homophily or common external shocks) can produce restrictions on\nthe potential outcomes \(y_i(d_k)\), and therefore imply that potential\noutcomes may vary in ways that an otherwise intuitive exposure mapping\nmay fail to capture.\n\nBecause units occupy different positions in the interference network,\ntheir probabilities of being in one or another exposure condition vary,\neven if treatment is randomly assigned. Insofar as network position also\naffects outcomes, such differences in exposure probabilities need\nto be taken into account when estimating exposure-specific causal\neffects. Otherwise, the analysis would be confounded. We show here that\nwhen the random assignment mechanism is known, these exposure\nprobabilities are also known. This allows one to condition on the\nexposure probabilities directly. To see this, define the exposure that\nunit \(i\) receives as \(D_i = f(\mathbf{Z}, \theta_i)\), a random variable with\nsupport \(\Delta_i \subseteq \Delta\) and for which\n\(\Pr(D_i = d) = \pi_i(d)\). For each unit \(i\) there is a vector,\n\(\boldsymbol{\pi}_i = (\pi_i(d_1), \ldots, \pi_i(d_K))'\), with the probability of\n\(i\) being subject to each of the possible exposures in\n\(\{d_1, \ldots, d_K\}\). \citet{aronow2017estimating} call \(\boldsymbol{\pi}_i\)\n\(i\)'s \emph{generalized probability of exposure}.
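To make \(\boldsymbol{\pi}_i\) concrete, the following Python sketch (a hypothetical three-unit line network, not the chapter's running example) computes each unit's generalized probability of exposure exactly by enumerating every assignment in \(\Omega\):

```python
from itertools import combinations
from collections import Counter

# Hypothetical network: three units in a line (1 -- 2 -- 3), with a
# completely randomized design that treats exactly one unit.
A = [[0, 1, 0],
     [1, 0, 1],
     [0, 1, 0]]
N, m = 3, 1

def exposure(z, i):
    # Same four-case exposure mapping as defined in the text.
    treated_peer = sum(zj * aj for zj, aj in zip(z, A[i])) > 0
    if z[i] and treated_peer:
        return "d11"
    if z[i]:
        return "d10"
    if treated_peer:
        return "d01"
    return "d00"

omega = list(combinations(range(N), m))  # |Omega| = 3, uniform p_z = 1/3
pi = [Counter() for _ in range(N)]       # pi[i][d] accumulates Pr(D_i = d)
for treated in omega:
    z = [int(i in treated) for i in range(N)]
    for i in range(N):
        pi[i][exposure(z, i)] += 1 / len(omega)

print(dict(pi[1]))  # middle unit: pi(d01) = 2/3, pi(d10) = 1/3
```

Note that \(\pi_1(d_{00}) = 0\) here: under this design the middle unit can never be entirely unexposed, illustrating how some exposures may have zero probability for some units, a point the text returns to below.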
For example, the\nexposure mapping defined above gives rise to\n\(\boldsymbol{\pi}_i = (\pi_i(d_{11}), \pi_i(d_{10}), \pi_i(d_{01}), \pi_i(d_{00}))'\).\nWe observe the unit-specific traits (\(\theta_i\)) necessary to define\nexposures for any treatment assignment vector, and the probability of\neach possible treatment assignment vector (\(p_{\mathbf{z}}\)) is known. This\nallows us to compute \(\pi_i(d_k)\) as the expected proportion of\ntreatment assignments which induce exposure \(d_k\) for unit \(i\). When\nthe set of possible treatment assignment vectors \(\Omega\) is small,\nthis can be computed exactly. When \(\Omega\) is large, one can\napproximate the \(\pi_i(d_k)\) values with arbitrary precision by taking\na large number of random draws from \(\Omega\).\n\citet{aronow2017estimating} discuss considerations for how many draws\nare needed so as to keep biases small. This Monte Carlo method may in\nsome cases require a prohibitive number of draws (for example, if\n\(|\Delta|\) is large), but for some specific designs and exposure\nmappings it may be possible to compute the \(\pi_i(d_k)\) values via a\ndynamic program, as in \citet{ugander2013graph}.\n\nThe following toy example illustrates how to compute the exposure\nreceived by each unit and the generalized probability of exposure using\nthe \texttt{interference} package for R \citep{zonszein-interference}.\nSuppose we have a set of \(N=10\) units and we randomly assign (without\nreplacement) a proportion \(p=0.2\) to treatment condition \(z_i=1\)\nwith uniform probability.
In this case, the realized treatment\nassignment \\(\\mathbf{Z}'\\) shows that units 6 and 9 are directly treated.\n\n\\begin{Shaded}\n\\begin{Highlighting}[]\n\\NormalTok{N <-}\\StringTok{ }\\DecValTok{10}\n\\NormalTok{p <-}\\StringTok{ }\\FloatTok{0.2}\n\\NormalTok{Z <-}\\StringTok{ }\\KeywordTok{make_tr_vec_permutation}\\NormalTok{(N, p, }\\DataTypeTok{R =} \\DecValTok{1}\\NormalTok{, }\\DataTypeTok{seed =} \\DecValTok{56}\\NormalTok{)}\n\\NormalTok{Z}\n\\end{Highlighting}\n\\end{Shaded}\n\n\\begin{verbatim}\n [,1] [,2] [,3] [,4] [,5] [,6] [,7] [,8] [,9] [,10]\n[1,] 0 0 0 0 0 1 0 0 1 0\n\\end{verbatim}\n\nNow let's suppose that units are connected according to a draw from a\nWatts-Strogatz model---a random graph generation model that produces\nnetworks with ``small world'' properties: high clustering in network\ninterconnections and short average path lengths that connect any two\narbitrary nodes (units). We assume an undirected network, that is, if\n\\(i\\) has a direct connection with \\(j\\), then \\(j\\) has one with \\(i\\),\nand that on average each unit is directly connected to four other units.\nA visualization of such a network is given in Figure\n\\ref{fig:small-network}, and its adjacency matrix is:\n\n\\begin{Shaded}\n\\begin{Highlighting}[]\n\\NormalTok{adj_matrix <-}\\StringTok{ }\\KeywordTok{make_adj_matrix}\\NormalTok{(N, }\\DataTypeTok{model =} \\StringTok{'small_world'}\\NormalTok{, }\\DataTypeTok{seed =} \\DecValTok{492}\\NormalTok{)}\n\\NormalTok{adj_matrix}\n\\end{Highlighting}\n\\end{Shaded}\n\n\\begin{verbatim}\n [,1] [,2] [,3] [,4] [,5] [,6] [,7] [,8] [,9] [,10]\n [1,] 0 1 1 0 1 0 0 0 0 0\n [2,] 1 0 1 1 0 0 0 0 0 1\n [3,] 1 1 0 1 1 1 0 0 1 0\n [4,] 0 1 1 0 1 1 1 0 0 0\n [5,] 1 0 1 1 0 1 0 1 0 0\n [6,] 0 0 1 1 1 0 0 0 0 0\n [7,] 0 0 0 1 0 0 0 1 1 0\n [8,] 0 0 0 0 1 0 1 0 1 1\n [9,] 0 0 1 0 0 0 1 1 0 1\n[10,] 0 1 0 0 0 0 0 1 1 0\n\\end{verbatim}\n\n\\begin{figure}\n\n{\\centering \\includegraphics{.\/unnamed-chunk-4-1} 
\n\n}\n\n\\caption{\\label{fig:small-network} Example of interference network with ten units. Each edge (link) represents a possible channel through which spillover effects might transmit.}\\label{fig:unnamed-chunk-4}\n\\end{figure}\n\nFor the purposes of our example, the adjacency matrix captures\n\\(\\Theta\\), while each row defines \\(\\theta_i'\\). The reason is that in\nour example, exposures are defined strictly through the combination of\nthe treatment assignment and the individual rows of the adjacency\nmatrix. In principle, exposure mappings could take other factors into\naccount, such as covariates that are not related to the adjacency matrix\nor other properties of the adjacency matrix besides a unit's row.\nReturning to the example, from \\(\\theta_6'\\) we know that unit 6 has\nedges to each of units 3, 4 and 5. Using the adjacency matrix\n(\\texttt{adj\\_matrix}) and \\(\\mathbf{Z}\\) (\\texttt{Z}) as arguments in the\nexposure mapping function defined above, we obtain the received exposure\nfor every unit. The argument \\texttt{hop\\ =\\ 1} describes a data\ngenerating process in which indirect exposure happens through the\nexistence of any direct peer receiving treatment:\n\n\\begin{Shaded}\n\\begin{Highlighting}[]\n\\NormalTok{D <-}\\StringTok{ }\\KeywordTok{make_exposure_map_AS}\\NormalTok{(adj_matrix, Z, }\\DataTypeTok{hop =} \\DecValTok{1}\\NormalTok{)}\n\\NormalTok{D}\n\\end{Highlighting}\n\\end{Shaded}\n\n\\begin{verbatim}\n dir_ind1 isol_dir ind1 no\n [1,] 0 0 0 1\n [2,] 0 0 0 1\n [3,] 0 0 1 0\n [4,] 0 0 1 0\n [5,] 0 0 1 0\n [6,] 0 1 0 0\n [7,] 0 0 1 0\n [8,] 0 0 1 0\n [9,] 0 1 0 0\n[10,] 0 0 1 0\n\\end{verbatim}\n\nWe can see that the received exposure for units 3, 4 and 5 is\n\\(d_{01} \\equiv \\text{Indirect Exposure}\\), given their direct\nconnection to unit 6, who is directly treated. Likewise for units 7, 8\nand 10 who are degree-one neighbors of unit 9. 
We can also see that\nthere are no units in exposure condition \\(d_{11}\\), because the two\ndirectly treated units (6 and 9) are not connected to each other.\n\nNow, to obtain the generalized probability of exposure of each unit we\nneed the exposure mapping function and its arguments: the adjacency\nmatrix and the set of all possible treatment assignments. When\n\\(\\left\\vert{\\Omega}\\right\\vert\\) is large we can approximate \\(\\Omega\\)\nproducing random replicate \\(\\mathbf{z}\\)'s. In this case we could easily\ncompute \\(\\Omega\\) because there are only 45 possible treatment\nassignment profiles, but for expository purposes, we work with 30 random\ndraws from \\(\\Omega\\) without replacement (setting the arguments\n\\texttt{R\\ =\\ 30} and \\texttt{allow\\_repetitions\\ =\\ FALSE}).\n\n\\begin{Shaded}\n\\begin{Highlighting}[]\n\\NormalTok{omega <-}\\StringTok{ }\\KeywordTok{make_tr_vec_permutation}\\NormalTok{(}\n\\NormalTok{ N, p,}\n \\DataTypeTok{R =} \\DecValTok{30}\\NormalTok{, }\\DataTypeTok{seed =} \\DecValTok{420}\\NormalTok{, }\\DataTypeTok{allow_repetitions =} \\OtherTok{FALSE}\n\\NormalTok{ )}\n\\NormalTok{prob_exposure <-}\\StringTok{ }\\KeywordTok{make_exposure_prob}\\NormalTok{(}\n\\NormalTok{ omega,}\n\\NormalTok{ adj_matrix,}\n\\NormalTok{ make_exposure_map_AS,}\n \\KeywordTok{list}\\NormalTok{(}\\DataTypeTok{hop =} \\DecValTok{1}\\NormalTok{)}\n\\NormalTok{ )}\n\\KeywordTok{make_prob_exposure_cond}\\NormalTok{(prob_exposure)}\n\\end{Highlighting}\n\\end{Shaded}\n\n\\begin{verbatim}\n [,1] [,2] [,3] [,4] [,5] [,6]\ndir_ind1 0.09677419 0.12903226 0.1612903 0.09677419 0.12903226 0.09677419\nisol_dir 0.19354839 0.09677419 0.1290323 0.12903226 0.06451613 0.12903226\nind1 0.41935484 0.61290323 0.6774194 0.70967742 0.67741935 0.45161290\nno 0.38709677 0.25806452 0.1290323 0.16129032 0.22580645 0.41935484\n [,7] [,8] [,9] [,10]\ndir_ind1 0.1290323 0.12903226 0.1290323 0.1290323\nisol_dir 0.2258065 0.09677419 0.1612903 0.1290323\nind1 
0.4193548 0.64516129 0.6129032 0.4516129\nno 0.3225806 0.22580645 0.1935484 0.3870968\n\\end{verbatim}\n\nThe columns capture \\(\\boldsymbol{\\pi}_i\\), the expected proportion of treatment\nassignments which result in each exposure. We can see that the exposure\nprobabilities are very heterogeneous. In this particular case, because\nthe exposure mapping only takes into account whether a unit is treated\nitself and then has at least one treated peer, the exposure\nprobabilities have a close relationship to the number of direct\nconnections with peers---i.e., a unit's network degree. Exposure\nmappings could be more complex, requiring that one account for different\ntraits of units (connections to treated peers of peers and other\ncovariates such as gender or age, for example), in which case exposure\neffects would not necessarily map so straightforwardly to network\ndegree.\n\n\\subsection{Spillover Effects as Contrasts Across\nExposures}\\label{spillover-effects-as-contrasts-across-exposures}\n\nWe now formally define spillover effects as contrasts between averages\nof individual potential outcomes across different exposures. To estimate\nexposure-specific average potential outcomes, the exposure mapping has\nto fully characterize interference. This condition implies that \\(K\\)\ntreatment exposures give rise to at most \\(K\\) distinct potential\noutcomes for each unit \\(i\\) in the population. We write the potential\noutcomes as \\((y_i(d_1), \\dots ,y_i(d_K))\\), where\n\\(y_i(d_k) = y_i(\\mathbf{z})\\) for all units, \\(k \\in \\{1, \\dots ,K\\}\\), and\n\\(\\mathbf{z} \\in \\Omega\\) such that \\(f(\\mathbf{z}, \\theta_i)=d_k\\). Then, observed\noutcomes must relate back to the potential outcomes:\n\\(Y_i = \\sum_{k = 1}^{K} \\mathbf{I}(D_i = d_k)y_i(d_k)\\). 
The average potential\noutcome at any exposure level \\(k\\) is then\n\\(\\mu(d_k) = \\frac{1}{N} \\sum_{i = 1}^{N} y_i(d_k)\\), and the average\ncausal effect of being in exposure condition \\(d_k\\) as opposed to\nexposure condition \\(d_l\\) is\n\n\\[\\tau(d_k, d_l) = \\frac{1}{N} \\sum_{i = 1}^{N} y_i(d_k) - \\frac{1}{N} \\sum_{i = 1}^{N} y_i(d_l) = \\mu(d_k) - \\mu(d_l).\\]\n\nTo estimate \\(\\sum_{i = 1}^{N} y_i(d_k) = y^T(d_k)\\) we have to take\ninto account that we observe \\(y_i(d_k)\\) only for those with\n\\(D_i=d_k\\), and that the probability of observing \\(y_i(d_k)\\) is not\nequal across units. Using the exposure mapping from the example above,\nthe probability of observing\n\\(y_i(d_{10}) = y_i(\\text{Isolated Direct Exposure})\\) is smaller for\nthose with more direct connections to other units in the network. But as\nwe saw above, by design, we can calculate the probability of the\nexposure conditions for each individual. Then, assuming that all units\nhave nonzero probabilities of being subject to each of the K\nexposures\\protect\\rmarkdownfootnote{If \\(\\pi_i(d_k)=0\\) for some units, then estimation\n of average potential outcomes \\(\\mu(d_k)\\) must be restricted to the\n subset of units for which \\(\\pi_i(d_k)>0\\). 
Interpretation of\n contrasts of average potential outcomes as causal effects would\n require doing so for units such that \\(\\pi_i(d)>0\\) for all\n \\(d \\in \\Delta' \\subset \\Delta\\); for example, when estimating\n \\(\\tau(d_k, d_l)\\) one would need to restrict analysis to units with\n both \\(\\pi_i(d_k)>0\\) and \\(\\pi_i(d_l)>0\\).}, \\(y^T(d_k)\\) can be\nestimated without bias with the Horvitz-Thompson inverse probability\nestimator:\n\n\\[\\widehat{y^T_{HT}}(d_k) = \\sum_{i = 1}^{N} \\mathbf{I}(D_i = d_k) \\frac{Y_i}{\\pi_i(d_k)}.\\]\nIn cases where \\(\\left\\vert{\\Omega}\\right\\vert\\) is high, and therefore\nwe use sampling from \\(\\Omega\\) to obtain estimates\n\\(\\hat \\pi_i(\\cdot)\\), those estimates are used in place of the true\n\\(\\pi_i(\\cdot)\\) values. A Horvitz-Thompson estimator of the average\nunit-level causal effect of exposure \\(k\\) versus \\(l\\),\n\\(\\tau(d_k, d_l)\\), is therefore\n\\[\\widehat{\\tau_{HT}}(d_k, d_l) = \\widehat{\\mu_{HT}}(d_k) - \\widehat{\\mu_{HT}}(d_l) = \\frac{1}{N} \\left[\\widehat{y^T_{HT}}(d_k) - \\widehat{y^T_{HT}}(d_l) \\right].\\]\n\nContinuing with the previous example, we show how to compute this\nestimator. We do so using simulated potential outcomes that exhibit\neffect heterogeneity and that vary in units' network degree, in which\ncase naive estimates that do not account for probabilities of exposure\nwould be biased. Specifically, we generate a variable with random values\nfrom an absolute standard normal distribution which is correlated with\nthe unit's first and second order degree---the number of peers and peers\nof peers, respectively (for which we use the arguments\n\\texttt{adj\\_matrix} and \\texttt{make\\_corr\\_out} in the function\nbelow). This variable determines the potential outcome under the\n\\(d_{00}\\) condition. 
To build heterogeneous effects into the analysis,\nwe assume what \\citet{rosenbaum99-dilated} refers to as ``dilated\neffects'' such that \\(y_i(d_{11})=2 \\times y_i(d_{00})\\),\n\\(y_i(d_{10})=1.5 \\times y_i(d_{00})\\),\n\\(y_i(d_{01})=1.25 \\times y_i(d_{00})\\). (The multipliers of\n\\(y_i(d_{00})\\) can be changed by passing a vector with 3 numbers to\n\\texttt{multipliers} in the function below).\n\n\\begin{Shaded}\n\\begin{Highlighting}[]\n\\NormalTok{potential_outcomes <-}\\StringTok{ }\\KeywordTok{make_dilated_out}\\NormalTok{(}\n\\NormalTok{ adj_matrix, make_corr_out, }\\DataTypeTok{seed =} \\DecValTok{1101}\\NormalTok{,}\n \\DataTypeTok{multipliers =} \\OtherTok{NULL}\\NormalTok{, }\\DataTypeTok{hop =} \\DecValTok{1}\n\\NormalTok{ )}\n\\end{Highlighting}\n\\end{Shaded}\n\nFrom the potential outcomes and received exposures (\\texttt{D}), we get\nthe observed outcomes.\n\n\\begin{Shaded}\n\\begin{Highlighting}[]\n\\NormalTok{observed_outcomes <-}\\StringTok{ }\\KeywordTok{rowSums}\\NormalTok{(D}\\OperatorTok{*}\\KeywordTok{t}\\NormalTok{(potential_outcomes))}\n\\end{Highlighting}\n\\end{Shaded}\n\nNext, we compute \\(\\widehat{\\tau_{HT}}(d_{10}, d_{00})\\), which isolates\nthe effect of direct exposure in the absence of any interaction with\nindirect exposure, as well as \\(\\widehat{\\tau_{HT}}(d_{01}, d_{00})\\),\nwhich isolates the effect of indirect exposure in the absence of any\ninteraction with direct exposure. 
In this case, we cannot compute\n\(\widehat{\tau_{HT}}(d_{11}, d_{00})\)---the interactive effect of\ndirect and indirect exposure---because in this small-scale example no\nunit received exposure \(d_{11}\).\n\n\begin{Shaded}\n\begin{Highlighting}[]\n\NormalTok{yT_HT <-}\StringTok{ }\KeywordTok{estimates}\NormalTok{(D, observed_outcomes, prob_exposure, }\DataTypeTok{hop =} \DecValTok{1}\NormalTok{)}\OperatorTok{$}\NormalTok{yT_ht}\n\end{Highlighting}\n\end{Shaded}\n\nThe object \texttt{yT\_HT} is a named numeric vector which contains the\nvalues of \(\widehat{y^T_{HT}}(d_k)\) for\n\(d_k=d_{11}, d_{10}, d_{01}, d_{00}\) in that order. Therefore, in\norder to compute \(\widehat{\tau_{HT}}\), we can take the difference of\neach of these values with the value of exposure condition\n\(d_{00}(\text{No Exposure})\) and then divide by the number of units\n\(N\).\n\n\begin{Shaded}\n\begin{Highlighting}[]\n\NormalTok{tau_HT <-}\StringTok{ }\NormalTok{((}\DecValTok{1}\OperatorTok{\/}\NormalTok{N)}\OperatorTok{*}\NormalTok{(yT_HT}\OperatorTok{-}\NormalTok{yT_HT[}\StringTok{'no'}\NormalTok{])[}\KeywordTok{names}\NormalTok{(yT_HT)}\OperatorTok{!=}\StringTok{'no'}\NormalTok{])}\n\NormalTok{tau_HT}\n\end{Highlighting}\n\end{Shaded}\n\n\begin{verbatim}\ndir_ind1 isol_dir ind1 \n NA 41.61043 39.51643 \n\end{verbatim}\n\nIn fact, the \texttt{estimates} function already computes\n\(\widehat{\tau_{HT}}\) directly:\n\n\begin{Shaded}\n\begin{Highlighting}[]\n\KeywordTok{estimates}\NormalTok{(D, observed_outcomes, prob_exposure, }\DataTypeTok{hop=}\DecValTok{1}\NormalTok{)}\OperatorTok{$}\NormalTok{tau_ht}\n\end{Highlighting}\n\end{Shaded}\n\n\begin{verbatim}\ndir_ind1 isol_dir ind1 \n NA 41.61043 39.51643 \n\end{verbatim}\n\nThe estimator \(\widehat{\tau_{HT}}(d_{k}, d_{l})\) is unbiased when it is\nestimated with \({\pi_i(d_k)}\) rather than \({\hat{\pi}_i(d_k)}\).
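To see the estimator's mechanics outside the \texttt{interference} package, here is a from-scratch Python sketch; the exposures, outcomes, and exposure probabilities are entirely made up for illustration:

```python
# Hypothetical data: observed exposures D_i, outcomes Y_i, and known
# exposure probabilities pi_i(d) for five units and two conditions.
D = ["d10", "d00", "d00", "d10", "d00"]
Y = [5.0, 2.0, 3.0, 6.0, 1.0]
pi = {"d10": [0.4, 0.2, 0.3, 0.5, 0.2],
      "d00": [0.6, 0.8, 0.7, 0.5, 0.8]}
N = len(Y)

def y_total_ht(d):
    """Horvitz-Thompson estimate of y^T(d) = sum_i y_i(d):
    inverse-probability-weight the units observed in exposure d."""
    return sum(y / p for di, y, p in zip(D, Y, pi[d]) if di == d)

tau_ht = (y_total_ht("d10") - y_total_ht("d00")) / N
print(round(tau_ht, 3))  # 3.293
```

Each unit contributes only to the exposure condition it actually received, weighted by the inverse of its probability of receiving that exposure, exactly as in the displayed formula for \(\widehat{\tau_{HT}}(d_k, d_l)\).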
As\nmentioned above, when estimated with the latter, the estimator is not\nunbiased, but the bias becomes negligible with a sufficiently large\nnumber of replicates \(R\). We implement variance estimators for\n\(\mathrm{Var}\left[\widehat{y^T_{HT}}(d_k)\right]\) and\n\(\mathrm{Var}\left[\widehat{\tau_{HT}}(d_{k}, d_{l})\right]\) as derived in\nEquation 11 of \citet{aronow2017estimating}. These are conservative\napproximations to the exact variances that are guaranteed to have\nnon-negative bias relative to the variance of the randomization\ndistribution of the estimators. Because unbiased estimators for\n\(\mathrm{Var}\left[\widehat{y^T_{HT}}(d_k)\right]\) are only identified when\nthe joint exposure probabilities of every pair of units are positive, it\nis necessary to add a correction term to\n\(\widehat{\mathrm{Var}}\left[\widehat{y^T_{HT}}(d_k)\right]\). In addition, because\n\(\mathrm{Cov}\left[\widehat{y^T_{HT}}(d_k), \widehat{y^T_{HT}}(d_l)\right]\) is\nalways unidentified, we use an approximation. Both of these corrections\ncontribute to the non-negative bias of the variance estimator. We also\nimplement the constant effects variance estimator derived in\n\citet{aronow2013dissertation}. This estimator operates under the\nassumption that exposure effects do not vary across subjects, and\ntherefore \(y_i(d_k)=y_i(d_l) + \tau(d_{k}, d_{l})\) for every unit\n\(i\). Then, one can estimate the variance by either plugging the\nestimated \(\tau(d_{k}, d_{l})\) values into the expression of the\nvariance or using them to reconstruct the full schedule of potential\noutcomes and then simulating new treatment assignments to approximate\nthe distribution of effect estimates. In the following applications, we\ntake the maximum between the constant effects variance estimator and the\nconservative variance estimator developed in\n\citet{aronow2017estimating}.
Confidence intervals are based on a\nlarge-N normal approximation:\n\(\widehat{\tau_{HT}}(d_{k}, d_{l}) \pm z_{1-\alpha\/2} \sqrt{\widehat{\mathrm{Var}}\left[\widehat{\tau_{HT}}(d_{k}, d_{l})\right]}\).\n\nAsymptotic convergence of these estimators, and therefore the\nreliability of normal approximations for inference, depend on whether\noutcome dependence across units is limited. In particular, consistency\nof \(\widehat{\tau_{HT}}(d_{k}, d_{l})\) follows from limits on the\namount of pairwise dependency in exposure conditions induced by both the\ndesign and the exposure mapping as the sample size increases. Going back\nto the antibullying program experiment of \citet{paluck2016changing},\nthis condition implies that as new students are added to a school, the\nnew peer connections that result between them and existing students\ncannot be too extensive.\n\nAn alternative to the Horvitz-Thompson estimator is the Hajek estimator,\nwhich improves efficiency with a small cost in terms of finite-sample\nbias. This estimator is a ratio approximation of the Horvitz-Thompson\nestimator:\n\[ \widehat{\mu}_H(d_k) = \frac{\sum_{i = 1}^{N} \mathbf{I}(D_i = d_k) \frac{Y_i}{\pi_i(d_k)}}{\sum_{i = 1}^{N} \mathbf{I}(D_i = d_k) \frac{1}{\pi_i(d_k)}}.\]\n\nThe Horvitz-Thompson estimator \(\widehat{\mu_{HT}}(d_k)\) has high\nvariance because some randomizations yield units with extremely high\nvalues of the weights \(1\/\pi_i(d_k)\). The Hajek refinement allows the\ndenominator of the estimator to vary according to the sum of the weights\n\(1\/\pi_i(d_k)\), thereby shrinking the magnitude of the estimator\nwhen the realized sum of weights is large, and increasing it when that\nsum is small.\n\nWe extend the example developed above to consider a more realistic\nsample size. 
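The contrast between the two estimators is easy to see in code. The following is a minimal Python sketch of the Horvitz-Thompson and Hajek mean estimators defined above (an illustrative translation of the formulas, not the implementation used by the package; the function names and toy data are ours):

```python
import numpy as np

def horvitz_thompson(Y, D, pi, d_k):
    """Horvitz-Thompson estimate of the mean potential outcome under d_k.

    Y  : observed outcomes, length N
    D  : realized exposure condition of each unit, length N
    pi : pi[i] = probability that unit i receives exposure d_k
    """
    ind = (D == d_k)                      # indicator I(D_i = d_k)
    return np.sum(ind * Y / pi) / len(Y)  # normalizes by the fixed N

def hajek(Y, D, pi, d_k):
    """Hajek estimate: same weighted sum, but normalized by the
    realized sum of inverse-probability weights instead of N."""
    w = (D == d_k) / pi                   # weights I(D_i = d_k) / pi_i(d_k)
    return np.sum(w * Y) / np.sum(w)

# Toy data: four units, two exposure conditions, known probabilities.
Y  = np.array([1.0, 2.0, 3.0, 4.0])
D  = np.array(["d11", "d00", "d11", "d00"])
pi = np.full(4, 0.5)

print(horvitz_thompson(Y, D, pi, "d11"))   # 2.0
print(hajek(Y, D, pi, "d11"))              # 2.0: equal pi -> simple mean
```

With equal probabilities the two estimators coincide; they diverge when the realized sum of weights differs from \(N\), which is exactly the situation in which the Hajek normalization stabilizes the estimate.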
Here and in the following sections, we use\na set of \(N=400\) units, and we randomly assign without replacement a\nproportion \(p=0.1\) to treatment condition \(z_i=1\) with uniform\nprobability. As in the previous example, the network is modeled as\nsmall-world with each unit directly connected on average to four other\nunits, and the potential outcomes follow the ``dilated effects''\nscenario. We set the number of replications to \(R=10000\) to compute\nexposure probabilities, and run 3000 simulated replications of the whole\nexperiment.\n\n\begin{table}\n\n\caption{\label{tab:tbhth}Comparing Horvitz-Thompson to Hajek Estimator, Using Approximate Exposure Probabilities}\n\centering\n\begin{threeparttable}\n\begin{tabular}[t]{c|c|c|c|c|c|c|c}\n\hline\nEstimator & Estimand & True-Value & Average-Value & Bias & SD & RMSE & MeanSE\\\n\hline\nHorvitz-Thompson & $\tau(d_{11}, d_{00})$ & 58.12 & 57.33 & -0.78 & 45.32 & 45.32 & 45.94\\\n\hline\nHorvitz-Thompson & $\tau(d_{10}, d_{00})$ & 29.06 & 29.07 & 0.01 & 26.60 & 26.60 & 33.11\\\n\hline\nHorvitz-Thompson & $\tau(d_{01}, d_{00})$ & 14.53 & 14.66 & 0.13 & 10.45 & 10.45 & 10.33\\\n\hline\nHajek & $\tau(d_{11}, d_{00})$ & 58.12 & 59.41 & 1.29 & 36.66 & 36.68 & 34.07\\\n\hline\nHajek & $\tau(d_{10}, d_{00})$ & 29.06 & 28.92 & -0.14 & 21.55 & 21.55 & 25.09\\\n\hline\nHajek & $\tau(d_{01}, d_{00})$ & 14.53 & 14.85 & 0.32 & 8.38 & 8.39 & 8.41\\\n\hline\n\end{tabular}\n\begin{tablenotes}\n\item \textit{Note: } \n\item Horvitz-Thompson estimator with the maximum of the conservative variance estimator and the constant effects variance estimator. Hajek estimator with linearized variance estimator. True-Value = Value of estimand. Average-Value = Value of estimator. SD = Empirical standard deviation from simulation. RMSE = Root-mean-square error. MeanSE = Mean standard error estimate. 
Estimators use approximate exposure probabilities calculated by drawing 10000 treatment assignments without replacement.\n\end{tablenotes}\n\end{threeparttable}\n\end{table}\n\nThe result of the simulation shown in Table \ref{tab:tbhth} illustrates\nthat the Hajek estimator is more efficient than the Horvitz-Thompson\nestimator. The empirical standard deviation from simulation is smaller\nfor the Hajek estimator, and the decrease in variance comes with little\ncost in bias, as indicated by a smaller root-mean-square error. Moreover,\nthe variance estimator appears reliable, given that the mean of the\nstandard error estimates is close to the empirical standard deviation\nfrom the simulation.\n\n\subsection{Misspecified exposure\nmappings}\label{misspecified-exposure-mappings}\n\nWe now consider the implications of misspecified exposure mappings.\nProposition 8.1 in \citet{aronow2017estimating} shows what happens when\nan exposure condition \(D_i = d_k\) specified by the experimenter is\nactually consistent with multiple potential outcomes for unit \(i\), in\nwhich case the exposure mapping is too coarse. Here, we examine these\nissues in the context of our running example. We consider the case where\none assumes that interference happens only through direct peer\nconnections (first-degree interference), but in fact units are exposed\nto the treatment of their direct peers and to the treatment of peers of\ntheir direct peers (second-degree interference). 
We also consider what happens when\nthe experimenter ignores interference, but in fact it is first- or\nsecond-degree.\n\nLet us begin with an exposure mapping as above with four exposures for\nfirst-degree interference (\(d_{11}\), \(d_{10}\), \(d_{01}\),\n\(d_{00}\)), then another exposure mapping with eight exposures\nbased on second-order interference (\(d_{111}\), \(d_{110}\),\n\(d_{101}\), \(d_{100}\), \(d_{011}\), \(d_{010}\), \(d_{001}\),\n\(d_{000}\)), and finally a no-interference exposure mapping with only\ntwo exposures (\(d_{1}\), \(d_{0}\)). Then, suppose there are two possible\ntrue data generating processes, one exhibiting only first-degree\ninterference and the other exhibiting second-degree interference. The\nexposure mapping is misspecified when the type of interference assumed\nby the experimenter does not match the true data generating process.\nThis gives rise to six scenarios, two of which have correctly specified\nexposure mappings while the other four are cases of misspecification.\n\nTo define a coherent notion of bias under misspecification, one needs to\ndefine quantities of interest in terms of treatment regimes. In our\ncase, we consider the contrast between average outcomes under 100\%\ntreatment saturation versus 0\% saturation: \[\n\tau(\mathbf{1},\mathbf{0}) = \frac{1}{N}\sum_{i=1}^N\left[y_i(\mathbf{1}) - y_i(\mathbf{0})\right],\n\] where \(\mathbf{1}\) and \(\mathbf{0}\) denote that\n100\% or 0\% of units are assigned to treatment, respectively.\n\nWhen the true data generating process involves no interference,\n\(\tau(\mathbf{1},\mathbf{0})\) is equivalent to the usual average\ntreatment effect (ATE). Under interference, this is not the case. When\nthe true data generating process involves only the first-degree\nspillover as per our running example, then\n\(\tau(\mathbf{1},\mathbf{0}) = \tau(d_{11}, d_{00})\). 
With\nsecond-degree spillover,\n\(\tau(\mathbf{1},\mathbf{0}) = \tau(d_{111}, d_{000})\).\nMisspecification will result in working with inappropriate contrasts to\nestimate \(\tau(\mathbf{1},\mathbf{0})\). For example, suppose the true\ndata generating process is first-degree, but one assumes no\ninterference. Then, one would mistakenly take the potential outcomes\n\(y_i(d_{11})\) to be equivalent to \(y_i(d_{10})\), and use a mixture\nof such outcomes in estimating the desired average of\n\(y_i(\mathbf{1})\) outcomes. To see this, consider estimating the\npopulation mean when everyone is treated, \(\mu(\mathbf{1})\), using a\nHorvitz-Thompson estimator (which we use here for simplicity; results\nfor the Hajek estimator would be along the same lines). Then, if we\nassume no interference, we would compute\n\n\[\n\begin{aligned}\n\widehat{\mu_{HT, None}}(\mathbf{1}) & = \frac{1}{N}\sum_{i=1}^N \mathbf{I}(Z_i = 1)\frac{Y_i}{\pi_i} \\\n& = \frac{1}{N}\sum_{i=1}^N \left(\mathbf{I}(D_i = d_{11})\frac{y_i(d_{11})}{\pi_i} + \mathbf{I}(D_i = d_{10})\frac{y_i(d_{10})}{\pi_i}\right),\n\end{aligned}\n\] where \(\pi_i = \text{Pr}[Z_i=1]\), while the unbiased estimator\nwould be \[\begin{aligned}\n\widehat{\mu_{HT}}(\mathbf{1}) = \widehat{\mu_{HT}}(d_{11}) & = \frac{1}{N}\sum_{i=1}^N \mathbf{I}(D_i = d_{11})\frac{Y_i}{\pi_i(d_{11})} \\\n& = \frac{1}{N}\sum_{i=1}^N \mathbf{I}(D_i = d_{11})\frac{y_i(d_{11})}{\pi_i(d_{11})},\n\end{aligned}\] where\n\(\pi_i(d_{11}) = \text{Pr}[Z_i\mathbf{I}(\mathbf{Z}'\theta_i>0) = 1]\).\n\nTherefore, the misspecified \(\widehat{\mu_{HT, None}}(\mathbf{1})\) is\nbiased for \(\mu(\mathbf{1})\) insofar as\n\(y_i(d_{11}) \ne y_i(d_{10})\) for some \(i\).\n\nWhen the assumed exposure mapping considers higher-degree interference\nthan the true data-generating process, the resulting estimator can be\nunbiased, but with a cost in variance. 
The reason is that this\nmisspecified estimator incorporates only a fraction of the available\nunits to construct the potential outcome average.\n\nTable \\ref{tab:tbposneg} and Figure \\ref{fig:miss_exposure} illustrate\nhow estimates vary over these different forms of misspecification in our\nsimulated data. For the sake of completeness, we also present results\nfor another simulation where the spillover effects are negative\n(essentially, choosing dilated effects multipliers when simulating\noutcome data such that the multipliers for \\(d_{11}\\) and \\(d_{01}\\) are\nsmaller than those of \\(d_{10}\\) and \\(d_{00}\\), respectively). Looking\nat Figure \\ref{fig:miss_exposure}, from left to right we plot the\ndistribution of point estimates for \\(\\widehat{\\tau}(d_{1}, d_{0})\\)\n(assuming no interference), \\(\\widehat{\\tau_{H}}(d_{11}, d_{00})\\)\n(assuming first-degree interference), and\n\\(\\widehat{\\tau_{H}}(d_{111}, d_{000})\\) (assuming second-degree\ninterference). Then, we vary whether the true data generating process\nexhibits first- or second-degree interference. In all cases, the target\nof inference is \\(\\tau(\\mathbf{1}, \\mathbf{0})\\). Under first-degree\ninterference, \\(\\tau(\\mathbf{1}, \\mathbf{0}) =\\tau(d_{11}, d_{00})\\),\nand under second-degree interference\n\\(\\tau(\\mathbf{1}, \\mathbf{0})=\\tau(d_{111}, d_{000})\\). These\nvariations in data-generating processes are shown going up and down\nFigure \\ref{fig:miss_exposure}, for the positive spillovers case in the\ntop panel and the negative spillovers case in the bottom panel. True\nvalues of the target quantities are shown with the dashed lines, and the\ndistributions of estimators over 3000 simulation runs are shown with the\nhistograms. 
We see that estimators are centered around the true\nquantities when the exposure mapping incorporates equal- or\nhigher-degree interference than the true data generating process.\nBecause the estimator incorporates only a fraction of the available\nunits to construct the potential outcome average, the variance is higher\nwhen the exposure mapping considers higher-degree interference than the\ntrue data generating process. By contrast, the estimators are biased\nwhen the exposure mapping considers lower-degree interference than the\ntrue data generating process. In this case, as we explained above,\npotential outcomes under different exposure conditions are taken to be\nequivalent, and therefore averaged together by the estimator when\nconstructing the potential outcome average. With positive spillover the\nestimators' bias is negative (top panel), whereas it is positive with\nnegative spillover (bottom panel). For monotonic\ninterference of the kinds considered here, bias is reduced by\nconsidering exposure mappings with more refined patterns of interference\n(for example, first-degree interference rather than no interference), as\nshown in Theorem 2.3 of \citet{eckles2017design}. Table \ref{tab:tbposneg}\npresents the evaluation metrics for each of the six scenarios depicted\nin Figure \ref{fig:miss_exposure}: bias, standard deviation of simulated\nestimates, and root-mean-square error.\n\nNote that this example uses a relatively small set of units (\(N=400\)),\nand the unbiased estimators (in this case,\n\(\widehat{\tau_{H}}(d_{11}, d_{00})\) and\n\(\widehat{\tau_{H}}(d_{111}, d_{000})\)) work with smaller subsets of\nthe data than the coarser, biased estimators (here,\n\(\widehat{\tau_{H}}(d_{1}, d_{0})\)). This is apparent if one looks at\nthe standard deviations (SD) in Table \ref{tab:tbposneg}. As such, with\nthese sample sizes, the root-mean-square error is dominated by estimation\nvariance rather than by bias. 
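The sign of this bias is easy to reproduce in isolation. Below is a toy Python simulation (with hypothetical potential outcomes and an independent stand-in for peer treatment, chosen purely for illustration) showing that the ``no interference'' Horvitz-Thompson estimator of \(\mu(\mathbf{1})\) centers on a mixture of \(y_i(d_{11})\) and \(y_i(d_{10})\) rather than on the mean of the \(y_i(d_{11})\):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical potential outcomes for 6 units under positive spillover:
# outcomes are higher when at least one peer is also treated.
y_d11 = np.array([10.0, 12.0, 9.0, 11.0, 10.0, 13.0])  # own + peer treatment
y_d10 = np.array([6.0, 7.0, 5.0, 6.0, 7.0, 6.0])       # own treatment only

p = 0.5        # Bernoulli(1/2) own-treatment probability
q = 0.5        # toy stand-in for Pr(at least one peer treated)
n_sims = 20000

est = np.empty(n_sims)
for s in range(n_sims):
    Z = rng.random(6) < p               # own assignments
    peer = rng.random(6) < q            # peer exposure (independent toy)
    Y = np.where(peer, y_d11, y_d10)    # realized outcome if treated
    est[s] = np.mean(Z * Y / p)         # HT for mu(1), ignoring interference

target = y_d11.mean()                   # true mu(1): everyone treated => d11
print(round(est.mean(), 2), round(target, 2))  # average sits well below target
```

The estimator averages the two potential-outcome schedules in proportion to how often each exposure occurs, so with positive spillover it understates \(\mu(\mathbf{1})\); increasing the number of simulations only tightens the distribution around the wrong value rather than removing the bias.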
As \\(N\\) gets larger, this would change: the\nSD would get smaller, but the bias would remain.\n\n\\begin{figure}\n\n{\\centering \\includegraphics[width=.8\\linewidth]{.\/unnamed-chunk-12-1} \\includegraphics[width=.8\\linewidth]{.\/unnamed-chunk-12-2} \n\n}\n\n\\caption{\\label{fig:miss_exposure}From left to right the plot shows the distribution of the point estimates for 3000 simulations when the exposure mapping ignores interference, assumes first-degree, and second-degree interference, respectively. The top panel is for a case of positive spillover effects, and the bottom panel is for a case of negative spillover effects. In each panel, the upper-row DGP is first-degree interference and the lower-row DGP is second-degree interference. The dashed vertical line represents the true value of the target quantity, which is the average effect of going from no one treated to everyone treated.}\\label{fig:unnamed-chunk-12}\n\\end{figure}\n\n\\begin{table}\n\n\\caption{\\label{tab:tbposneg}Mispecifying Exposure Conditions}\n\\centering\n\\begin{threeparttable}\n\\begin{tabular}[t]{c|c|c|c|c|c|c}\n\\hline\nSpillover & Estimand & Estimator & True-Value & Bias & SD & RMSE\\\\\n\\hline\npositive & $\\tau(d_{11}, d_{00})$ & $\\hat{\\tau_H}(d_{1}, d_{0})$ & 58.12 & -23.33 & 19.06 & 30.12\\\\\n\\hline\npositive & $\\tau(d_{11}, d_{00})$ & $\\hat{\\tau_H}(d_{11}, d_{00})$ & 58.12 & 1.29 & 36.66 & 36.68\\\\\n\\hline\npositive & $\\tau(d_{11}, d_{00})$ & $\\hat{\\tau_H}(d_{111}, d_{000})$ & 58.12 & 3.90 & 44.14 & 44.30\\\\\n\\hline\npositive & $\\tau(d_{111}, d_{000})$ & $\\hat{\\tau_H}(d_{1}, d_{0})$ & 72.65 & -32.21 & 21.42 & 38.68\\\\\n\\hline\npositive & $\\tau(d_{111}, d_{000})$ & $\\hat{\\tau_H}(d_{11}, d_{00})$ & 72.65 & -6.55 & 40.92 & 41.44\\\\\n\\hline\npositive & $\\tau(d_{111}, d_{000})$ & $\\hat{\\tau_H}(d_{111}, d_{000})$ & 72.65 & 4.28 & 48.95 & 49.13\\\\\n\\hline\nnegative & $\\tau(d_{11}, d_{00})$ & $\\hat{\\tau_H}(d_{1}, d_{0})$ & 14.53 & 14.87 & 14.73 & 
20.93\\\\\n\\hline\nnegative & $\\tau(d_{11}, d_{00})$ & $\\hat{\\tau_H}(d_{11}, d_{00})$ & 14.53 & 0.85 & 23.36 & 23.37\\\\\n\\hline\nnegative & $\\tau(d_{11}, d_{00})$ & $\\hat{\\tau_H}(d_{111}, d_{000})$ & 14.53 & 2.77 & 30.23 & 30.35\\\\\n\\hline\nnegative & $\\tau(d_{111}, d_{000})$ & $\\hat{\\tau_H}(d_{1}, d_{0})$ & 7.26 & 23.92 & 13.65 & 27.54\\\\\n\\hline\nnegative & $\\tau(d_{111}, d_{000})$ & $\\hat{\\tau_H}(d_{11}, d_{00})$ & 7.26 & 6.42 & 21.27 & 22.21\\\\\n\\hline\nnegative & $\\tau(d_{111}, d_{000})$ & $\\hat{\\tau_H}(d_{111}, d_{000})$ & 7.26 & 2.58 & 28.05 & 28.16\\\\\n\\hline\n\\end{tabular}\n\\begin{tablenotes}\n\\item \\textit{Note: } \n\\item Hajek estimator with linearized variance estimator. True-Value = Value of estimand. SD = Empirical standard deviation from simulation. RMSE = Root-mean-square error. Estimators use approximate exposure probabilities computed by drawing 10000 treatment assignments without replacement from the set of possible treatment profiles.\n\\end{tablenotes}\n\\end{threeparttable}\n\\end{table}\n\n\\subsection{Misspecified network ties}\\label{misspecified-network-ties}\n\nAnother type of misspecification is when the network measured by the\nexperimenter is different than the actual interference network. In this\ncase, the bias of the estimator will increase with the proportion of\nmis-measured ties. For example, with positive spillover and an elicited\nnetwork with missing ties, one would mistakenly take a mixture of the\npotential outcomes \\(y_i(d_{01})\\) and \\(y_i(d_{00})\\) in estimating the\naverage of \\(y_i(d_{00})\\), leading to an overestimate and thereby\ncontributing negative bias for any estimate of an effect relative to\n\\(\\mu(d_{00})\\).\n\nFigure \\ref{fig:miss_ties} shows more general consequences of working\nwith network estimates based on a network that is randomly missing\nvarying proportions of ties in the interference network. 
The data\ngenerating process is the first-degree interference set-up described\nabove, and the figure shows results from 3000 simulation draws, plotting\ndistributions of point estimates for\n\(\widehat{\tau_{H}}(d_{01}, d_{00})\),\n\(\widehat{\tau_{H}}(d_{10}, d_{00})\), and\n\(\widehat{\tau_{H}}(d_{11}, d_{00})\). The distribution of point\nestimates is centered around the target quantity (the dashed line) when\nthe proportion of unmeasured ties is zero, indicating unbiasedness.\nHowever, as the proportion of unmeasured ties increases, the\ndistributions shift to the left or right, depending on the net effect\nof the biases for the estimated potential outcome means.\n\n\begin{figure}\n\n{\centering \includegraphics{.\/unnamed-chunk-13-1} \n\n}\n\n\caption{\label{fig:miss_ties}The plot shows the distribution of point estimates for 3000 simulations given different proportions of missing ties for a case of positive spillover. The dashed vertical line represents the true value of the target quantity.}\label{fig:unnamed-chunk-13}\n\end{figure}\n\n\subsection{Sensitivity analysis for\nmisspecification}\label{sensitivity-analysis-for-misspecification}\n\n\citet{egami2017unbiased} proposes a sensitivity analysis for\nmisspecification of the exposure mapping due to unobserved networks. The\nsensitivity analysis considers a situation where spillover effects occur\non an unobserved ``offline'' network when the experimenter only observes\nan ``online'' network. This case captures situations where one uses, for\nexample, online social network data like a Twitter follower network to\nspecify the interference network, but interference can also occur\nvia offline ties like a network of high school classmates that is not\ncaptured by the online network. The analysis could apply to any\nsituation where the measured network fails to capture all relevant ties\nin the true interference network. 
The sensitivity analysis focuses on\nestimating the \emph{average network-specific spillover effect} (ANSE),\nwhich is the average causal effect of changing the treatment status of\nneighbors in the online network (for example, the status of Twitter\nties) without changing the treatment status of neighbors in the offline\nnetwork (high school classmates). The analysis also requires the\n\emph{stratified interference} assumption (to which we return below),\nwhich assumes that potential outcomes of unit \emph{i} are affected by\n\emph{i}'s own treatment assignment and only the treated\n\emph{proportion} of online and offline neighbors; the precise set of\ntreated neighbors does not matter. Under this assumption,\n\citet{egami2017unbiased} develops parametric and nonparametric\nsensitivity analysis methods. The parametric method additionally assumes\nthat the unobserved network spillover effect is linear and additive, and\nhelps to derive simple formal conditions under which unobserved networks\nwould explain away the estimated ANSE. The nonparametric method instead\nassumes only that outcomes are non-negative, and is used to bound the\nANSE.\n\n\subsection{Efficient designs to estimate effects under\ninterference}\label{efficient-designs-to-estimate-effects-under-interference}\n\nWe now consider implications of the preceding analysis for designing\nexperiments that estimate exposure-specific effects efficiently. Let us\nfirst consider why such designs are needed. If the goal is to estimate\nthe average difference in unit potential outcomes under 100\% versus 0\%\ntreatment saturation, defined as \(\tau(\mathbf{1},\mathbf{0})\) above,\nnaive designs can perform poorly. \citet{ugander2013graph} show that\nunit-level designs need not even yield asymptotically consistent\nestimators under first-degree interference if degree also increases\nwith the sample size. 
The problem is that very few units end up with\neither all or no first-degree neighbors treated.\n\nAnalogously to designs for the partial interference setting considered\nbelow, \citet{ugander2013graph} propose cluster-randomized designs. In\nthis \emph{graph cluster randomization}, the network (or graph) is\npartitioned into a set of clusters, such that units closer to each other\nin the network are assigned to the same cluster, and then treatment\nrandomization is performed at the cluster level. Estimation then employs\nthe inverse-probability weighted methods described above. When\n\(\tau(\mathbf{1},\mathbf{0})\) is the estimand, graph cluster\nrandomization can lead to exponentially (in sample size) lower estimator\nvariance as compared to unit-level random assignment. This is because\nunder graph cluster randomization, connected units are assigned to the\nsame treatment condition more often than would happen with unit-level\nassignment, increasing the expected number of units who are exposed to\none of the full neighborhood exposure conditions. On the other hand,\nassigning units by cluster can contribute to increases in variance,\nespecially insofar as it means that---due to homophily---units with\nsimilar outcomes will tend to be assigned to one or another exposure\ncondition together. \citet{ugander2013graph} analyze an intuitive graph\nclustering method, \(\epsilon\)-net clustering, where clusters are\nformed by finding a set of units such that all units in the set are at\nleast \(\epsilon\) hops from each other, and every unit outside the set is\nwithin \(\epsilon-1\) hops of a unit in the set. With \(\epsilon=3\),\ngraph cluster randomization has some desirable asymptotic properties\neven as average degree grows with the sample size. 
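The \(\epsilon\)-net construction just described can be sketched directly. The following Python function is our illustrative greedy implementation (assuming an adjacency-list graph; it is not code from \citet{ugander2013graph}): repeatedly picking an uncovered unit as a new cluster center guarantees that centers are at least \(\epsilon\) hops apart, and a final multi-source breadth-first search assigns every unit to its nearest center:

```python
from collections import deque

def epsilon_net_clusters(adj, eps=3):
    """Greedy epsilon-net graph clustering.

    adj : dict mapping each node to an iterable of its neighbors
    eps : centers end up at least `eps` hops apart; every node is
          within `eps - 1` hops of some center.
    Returns a dict mapping each node to its cluster's center.
    """
    uncovered = set(adj)
    centers = []
    while uncovered:
        c = min(uncovered)        # deterministic choice of a new center
        centers.append(c)
        # cover everything within eps - 1 hops of the new center (BFS)
        frontier, seen = deque([(c, 0)]), {c}
        while frontier:
            v, d = frontier.popleft()
            uncovered.discard(v)
            if d < eps - 1:
                for w in adj[v]:
                    if w not in seen:
                        seen.add(w)
                        frontier.append((w, d + 1))
    # assign each node to its nearest center via multi-source BFS
    cluster = {c: c for c in centers}
    frontier = deque(centers)
    while frontier:
        v = frontier.popleft()
        for w in adj[v]:
            if w not in cluster:
                cluster[w] = cluster[v]
                frontier.append(w)
    return cluster

# Toy path graph 0-1-2-3-4-5: two clusters emerge with eps = 3.
adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3, 5], 5: [4]}
print(epsilon_net_clusters(adj, eps=3))
```

Randomization would then proceed at the level of these clusters, for example assigning each center's cluster to treatment with probability 1/2.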
In practice,\nexperimenters can use many other methods for graph partitioning or\ncommunity detection, but these are more difficult to study analytically;\n\citet{saveski2017detecting} includes an empirical comparison of such\nmethods.\n\nWe compare the Horvitz-Thompson and Hajek estimators and their variances\nfor the causal estimand of full neighborhood exposure,\n\(\tau(d_{\mathbf{1}},d_{\mathbf{0}})\), for the case in which units are\nassigned to treatment via unit-level randomization as opposed to 3-net\ncluster randomization. In both cases, our simulation assumes that\ntreatment assignment has a \(\text{Bernoulli}(1\/2)\) distribution, that\nthere are 400 units connected by a small-world network with average\ndegree 4, and that units respond to treatment following a dilated\neffects scenario in which the outcomes have a positive lower bound and\nare correlated with the unit's first- and second-order degree. To\ncompute the exposure probabilities, we set the number of replications to\n\(R=10000\). We run 3000 simulated replications of the experiment. The\nresults of the simulation are presented in Table\n\ref{tab:tbunitcluster}. They show that graph cluster randomization\nleads to substantially lower estimator variability with no cost in bias,\nas indicated by a smaller root-mean-square error.\n\nOn the other hand, if the exposure model is misspecified, neither design\nyields unbiased or consistent estimators; however, under some models in\nwhich interference is monotonic, graph cluster randomization can\nnonetheless reduce bias at the cost of variance\n\citep{eckles2017design}.\n\nThe above discussion considers contrasting 100\% versus 0\% treatment\nsaturation, for which graph cluster randomization---and other designs\nthat produce network autocorrelation in treatment---can be advantageous.\nOther estimands may motivate quite different experimental designs. 
For\nexample, if the goal is to estimate interference effects (e.g.,\n\(\tau(d_{01}, d_{00})\)), graph cluster randomization with a perfect\npartitioning of a network into its \(k\) connected components may make\nestimation impossible, since all units will have the same treatment as\ntheir neighbors.\n\n\begin{table}\n\n\caption{\label{tab:tbunitcluster}Comparing Unit to Cluster Randomization}\n\centering\n\begin{threeparttable}\n\begin{tabular}[t]{c|c|c|c|c|c|c|c}\n\hline\nRandomization & Estimator & Estimand & True-Value & Average-Value & Bias & SD & RMSE\\\n\hline\nUnit & Horvitz-Thompson & $\tau(d_{\mathbf{1}}, d_{\mathbf{0}})$ & 58.12 & 55.70 & -2.42 & 139.65 & 139.65\\\n\hline\nCluster & Horvitz-Thompson & $\tau(d_{\mathbf{1}}, d_{\mathbf{0}})$ & 58.12 & 57.52 & -0.60 & 61.31 & 61.30\\\n\hline\nUnit & Hajek & $\tau(d_{\mathbf{1}}, d_{\mathbf{0}})$ & 58.12 & 48.27 & -9.85 & 52.89 & 53.79\\\n\hline\nCluster & Hajek & $\tau(d_{\mathbf{1}}, d_{\mathbf{0}})$ & 58.12 & 55.01 & -3.10 & 28.89 & 29.05\\\n\hline\n\end{tabular}\n\begin{tablenotes}\n\item \textit{Note: } \n\item Unit = unit-level randomization. Cluster = 3-net clustering. Horvitz-Thompson estimator with the maximum of the conservative variance estimator and the constant effects variance estimator. Hajek estimator with linearized variance estimator. True-Value = Value of estimand. Average-Value = Value of estimator. SD = Empirical standard deviation from simulation. RMSE = Root-mean-square error. Estimators use approximate exposure probabilities computed by drawing 10000 treatment assignments without replacement.\n\end{tablenotes}\n\end{threeparttable}\n\end{table}\n\n\subsection{Empirical Studies}\label{empirical-studies}\n\n\citet{bond201261} analyze the influence of peers within a large-scale\nvoter-mobilization network experiment delivering messages to 61 million\nFacebook users on the day of the 2010 U.S. 
Congressional Election.\nUsers were randomly assigned to a social message (98\% of users), an\ninformational message (1\%), or a control condition (1\%). The two\ntreatment conditions encouraged users to vote, provided a polling-place\nlink, allowed them to indicate that they had voted by clicking an ``I\nvoted'' button, and showed a counter with the number of users who had\npreviously reported voting. In addition, the social message presented\npictures of friends who had reported voting. The polling-place link was\nused as a measure of desire to seek information about the election and\nthe ``I voted'' button as a measure of self-reported voting.\nAdditionally, the turnout of about 10\% of the users was validated with\npublic voting records. The estimated results suggest positive direct\neffects of the social message on information seeking, self-reported\nvoting and validated voting, whereas the informational message did not\naffect turnout. Results suggest positive spillover effects on ``close\nfriends'' with whom users interact frequently on Facebook and who are\nlikely to have real-world, face-to-face relationships, but no effects on\ndistant friends. The way in which \citet{bond201261} analyzed spillover\neffects did not properly account for direct effects, a problem discussed\nby \citet{taylor-eckles-social-influence-experiments} when considering\nsharp null hypothesis tests on indirect effects.\n\nA follow-up study was conducted by \citet{jones2017social} during the\n2012 Presidential Election. In this study, the experimental conditions\nwere adjusted to better understand the mechanisms that were likely to\ndrive effects, and the analysis of spillover effects corrected the\nproblems in \citet{bond201261}. 
Instead of treating the large majority\nof users with the social message condition and assigning the rest to the\ninformational or control conditions, this time a 2x2 design varied\nwhether or not individuals saw a post at the top of their News Feeds\nthat encouraged turnout in a similar way to the social message (the\nbanner condition), and whether or not users saw individual posts within\ntheir News Feed regarding friends' voting if at least one of their\nfriends in the banner condition had clicked on the ``I voted'' button\n(the feed condition). In the 2010 experiment, users in the banner\ncondition also saw these messages within their feeds, making it\nimpossible to separate the encouragement from the social effect.\nFindings suggest that users directly exposed to both the banner and feed\nconditions were significantly more likely to have voted than those in\nthe control condition. Spillover effects happened through those\nencouraged to vote by the banner condition, whereas the feed treatment\ndid not spill over to close friends. However, this time there were other\ndifferences in the messaging due to the context of the 2x2 design. For\nexample, messages in the feed condition did not contain the button to\nself-report voting or the link to find the polling place. These features\nwould also need to be randomized to learn more about the differences\nbetween indirect effects across the banner and feed conditions.\n\n\citet{coppock2016treatments} explore direct and spillover effects of a\nmobilization campaign on Twitter on informal political participation,\nparticularly signing an online petition and sharing the petition link.\nFollowers of an environmental nonprofit advocacy organization were\nrandomly assigned to receive a private message (2\/3 of followers) or a\npublic message (1\/3), and those receiving the private message were\neither primed to have an identity of high commitment (organizer) or of\nlow commitment (follower) in equal shares. 
A second manipulation encouraged\na random subset of petition signers to share the petition with their own\nfollowers (who are also followers of the environmental organization and\nwho mostly follow one petition signer) as a way to measure spillover\neffects among those who actually respond directly to treatment. The\ndesign of this second manipulation reduces the number of exposure\nconditions, which would otherwise be prohibitively large when the\nresearcher is interested in measuring indirect effects for every\npossible number of treated users who are followed (from 0 to 601,\naccording to the largest unit's degree in this network), while\naccounting for exposure probabilities, and without parameterizing the\nresponse to exposure in the exposure mapping. Results show that direct\nmessages boost petition signatures and tweet behavior, and that priming\nthe follower identity is more effective than the organizer identity.\nRegarding indirect effects, there is evidence that signing the petition\nwas influenced by others' treatment.\n\nRecent empirical studies exploiting offline networks include\n\citet{green2016effects}, who analyze spatial spillover effects in a\nseries of field experiments testing the impact of lawn signs on vote\noutcomes by planting them in randomly selected voting precincts. In this\ncase, to account for indirect effects, the experimental design ensured\nthat two neighboring precincts would not be assigned to direct treatment\nat the same time. 
\citet{baicker2005spillover} exploits exogenous shocks\nto state medical spending to explore whether spending decisions spill\nover to neighboring states, while \citet{isen2014local} leverages a\ndiscontinuity from local referendum results to assess whether fiscal\ndecisions of one jurisdiction, particularly taxing and spending,\ninfluence the fiscal decisions of its neighbors, and\n\citet{rogowski2012estimating} use the House office lottery (in which\nnewly elected members select their office spaces in a randomly chosen\norder) as an instrumental variable to estimate the impact of legislative\nnetworks on roll call behavior and cosponsorship decisions. The landmark\nstudy by \citet{sacerdote2001peer} uses natural random assignment of\ncollege roommates to measure spillover effects on educational\nperformance outcomes.\n\nOther studies have focused on strategies for targeting interventions in\nnetworks (i.e., seeding) so as to capitalize on heterogeneous\nspillovers. While there is a large theoretical and algorithmic\nliterature on this problem (influence maximization), triggered by\n\citet{kempe2003maximizing}, who provide a set of algorithms to maximize\nbehavior diffusion when the researcher has knowledge of the network,\nthere are only a few randomized experiments. These typically rely on\nimposing partial interference assumptions (see below) so that outcomes\nof different villages or schools, for example, can be treated as\nindependent observations. Some studies used random or haphazard\nassignment of treatment to analyze what types of units produce the\nlargest spillovers. 
For example, the previously mentioned antibullying program study by \\citet{paluck2016changing} measured the network structure of 56 schools in New Jersey to analyze peer diffusion effects of randomly selected seed groups of students encouraged to take a stance against conflict at school, finding that students with more direct connections are the most effective at influencing social norms and behavior among their direct peers and at the school level. Similarly, \\citet{banerjee2013diffusion} study the impact of the (non-randomized) choice of targeted individuals on the diffusion of participation in a new micro-finance loan program in India that invited leaders to an informational meeting and asked them to spread information about the loans. The authors develop a model of word-of-mouth diffusion and apply it to network data from 43 villages, collected by surveying households before the start of the program. The model distinguishes between information passing (learning about the program from neighbors) and endorsement (being influenced by neighbors' adoption of the program---what we refer to in this chapter as contagion). This allows one to tease out the likelihood of information passing through participants as opposed to non-participants, and the marginal endorsement effect conditional on being informed. These estimates are used to propose measures of individuals' effectiveness as seeding points. A smaller number of studies use experiments designed to compare seeding strategies. 
For example, \\citet{kim2015social} compare the effectiveness of three seeding strategies: randomly selected individuals, individuals with the highest number of direct connections, and random friends from a nominated set of friends of random individuals (one-hop targeting), on take-up rates of a public health program in rural municipalities in Honduras; relying on strong parametric and independence assumptions, they find that, for one of two behaviors, one-hop targeting performs best. A similar strategy is examined by \\citet{banerjee2019using}, but with ``ambitious'' questions that ask respondents to select someone who would be good at spreading information; the results provide suggestive evidence that this strategy may increase spread over random seeding and seeding using village leaders. Other field experiments have been conducted for diffusion of agricultural knowledge and technology \\citep{beaman2018diffusion, beaman2018can}. \\citet{chin2018evaluating} present estimators and optimal experimental designs for studying seeding strategies that make use of at-most partial network information, as do the strategies studied by \\citet{kim2015social} and \\citet{banerjee2019using}.\n\n\\section{Partial Interference and Marginal Causal\nEffects}\\label{partial-interference-and-marginal-causal-effects}\n\nThe analysis in the preceding section is quite general in that it does little to restrict the network over which interference can occur. The only restriction was a ``local interference'' assumption needed for asymptotic results to hold. The drawback, however, was that one needed a specification of the interference network that was either complete or that overcompensated for interference in the true data generating process. 
In this section, we review the approach of \\citet{hudgens2008toward}, who work under the assumption that the interference network itself is unknown, but that one can assume interference is limited to occurring within well-defined and non-overlapping groups, for example, villages or households. The assumption that interference does not cross group boundaries is known as ``partial interference'' \\citep{sobel2006}. If the interference network is not known, then we cannot map each assignment vector \\(\\mathbf{z}\\) to an exposure, in which case we cannot estimate exposure-specific effects. Rather, \\citet{hudgens2008toward} define more agnostic ``marginal causal effects'' that average over sets of assignment profiles that could, in some unspecified way, generate spillover effects. This will be made more precise below. The analysis depends on a two-stage hierarchical design, in which groups are first randomly assigned to a level of treatment saturation, and then units within groups are randomly assigned to treatment with probability equal to their group saturation rate.\n\n\\subsection{Marginal Causal Effects}\\label{marginal-causal-effects}\n\nIn the most general case, each treatment assignment \\(\\mathbf{z} \\in \\Omega\\) generates a distinct potential outcome for unit \\(i\\). Under partial interference, the potential outcome for unit \\(i\\) depends only on the treatment assignments for units in \\(i\\)'s group \\(g\\). For example, suppose that we have six units, \\(i=1,2,3,4,5,6\\), split up into two groups such that group A contains units 1, 2, and 3, and group B contains units 4, 5, and 6. 
Assuming that unit 2 in group A was assigned\nto treatment, partial interference would imply that unit 1's potential\noutcomes would be the same for all assignment vectors in the set\n\n\\begin{align*}\n\\{(0,1,0,\\mathbf{z}_{i,B}):\\mathbf{z}_{i,B}\\in \\Omega_B \\} = \\{ & (0,1,0,0,0,0), (0,1,0,1,0,0), (0,1,0,0,1,0), (0,1,0,0,0,1), \\\\\n& (0,1,0,1,1,0), (0,1,0,1,0,1), (0,1,0,0,1,1), (0,1,0,1,1,1)\\},\n\\end{align*}\n\nwhere \\(\\Omega_B\\) is the set of possible assignments for group B. At\nthe same time, it may very well be that assigning unit 3 to treatment\ninstead of unit 2 would have different implications for unit 1's\npotential outcomes---i.e., it may be that\n\\(y_{1,A}(0,1,0,\\mathbf{z}_{i,B}) \\ne y_{1,A}(0,0,1,\\mathbf{z}_{i,B})\\).\nMoreover, it would be safe to assume that either of these might differ\nfrom unit 1's outcome if no one in group A were assigned to treatment,\n\\(y_{1,A}(0,0,0,\\mathbf{z}_{i,B})\\), or if both 2 and 3 were assigned to\ntreatment \\(y_{1,A}(0,1,1,\\mathbf{z}_{i,B})\\). Now, suppose three fair\ncoin flips were used to determine whether unit 1, unit 2, or unit 3\nshould be assigned to treatment. Then, there is a 50-50 chance that unit\n1 would be assigned to control. Conditional on unit 1 being assigned to\ncontrol, the expected value of unit 1's outcome would be the average\nover the four potential outcomes \\(y_{1,A}\\) enumerated above. 
This\nexpected value is unit 1's marginal (i.e., average) potential outcome\ngiven that unit 1 is not treated but under a regime that assigns units\nin 1's group to treatment with 50-50 probability.\n\\citet{hudgens2008toward}'s analysis defines marginal causal effects as\ncontrasts between such marginal potential outcomes.\n\nMore generally, we refer to the individual's marginal potential outcome\nas \\(y_{ig}(z; \\psi)\\) when \\(i\\) is assigned to treatment value \\(z\\)\nand other treatment assignments are determined by an assignment regime\ncharacterized by the parameter \\(\\psi\\), which describes the degree of\ntreatment saturation. In the simple example in the preceding paragraph,\nwe have \\(\\psi = 0.50\\) to describe the regime where each unit is\nassigned to treatment using a Bernoulli draw with \\(p=0.50\\), in which\ncase the expected saturation is 50\\%. Under complete random assignment,\nwhich fixes the number of treated and control units, \\(\\psi\\) could\nindex the share of units assigned to treatment.\n\n\\citet{hudgens2008toward} consider four types of marginal causal\neffects: direct, indirect, total, and overall effects. The direct effect\nfor a particular unit corresponds to the difference between the unit's\npotential outcomes when its treatment assignment changes while the group\ntreatment assignment is kept fixed to a given saturation. Then, the\n\\emph{group average direct causal effect}, under treatment saturation\n\\(\\psi\\) and group size \\(n_g\\) can be defined as\n\\(\\tau^D_g(\\psi) = \\frac{1}{n_g} \\sum_{i = 1}^{n_g} y_{ig}(1; \\psi) - \\frac{1}{n_g} \\sum_{i = 1}^{n_g} y_{ig}(0; \\psi)\\),\nand the \\emph{population average direct causal effect} \\(\\tau^D(\\psi)\\)\n(or simply, the average direct causal effect) is the average of\n\\(\\tau^D_g(\\psi)\\) across groups. 
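The coin-flip computation just described is easy to make concrete. A minimal Python sketch (the potential-outcome values for unit 1 are hypothetical) enumerates the four group-A assignment profiles consistent with unit 1 being in control and averages over them:

```python
from itertools import product

# Hypothetical potential outcomes for unit 1 under control, indexed by the
# assignments (z2, z3) of units 2 and 3 in group A. Under partial
# interference, group B's assignment is irrelevant to unit 1 and is omitted.
y1 = {(0, 0): 0.10, (0, 1): 0.30, (1, 0): 0.25, (1, 1): 0.40}

# Units 2 and 3 are each assigned by an independent fair coin flip
# (psi = 0.5), so all four profiles are equally likely.
marginal_y1_control = sum(y1[z] for z in product([0, 1], repeat=2)) / 4
print(marginal_y1_control)
```

The result is the marginal potential outcome of unit 1 under control at saturation 0.5: a simple average of the four enumerated values.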
The indirect effect describes the\neffect on a unit of the treatment received by others in the group, and\nis obtained from differences across saturation values \\(\\psi\\) and\n\\(\\phi\\). Thus, the \\emph{group average indirect causal effect} is\n\\(\\tau^I_g(\\psi, \\phi) = \\frac{1}{n_g} \\sum_{i = 1}^{n_g} y_{ig}(0; \\psi) - \\frac{1}{n_g} \\sum_{i = 1}^{n_g} y_{ig}(0; \\phi)\\),\nwhile the average indirect causal effect averages across groups. The\ntotal causal effect combines the direct and indirect effects to capture\nthe effect of being directly treated and exposed to the treatment by\nothers in the group. Thus, the \\emph{group average total causal effect}\nis\n\\(\\tau^{To}_g(\\psi,\\phi) = \\frac{1}{n_g} \\sum_{i = 1}^{n_g} y_{ig}(1; \\psi) - \\frac{1}{n_g} \\sum_{i = 1}^{n_g} y_{ig}(0; \\phi)\\),\nand the average total causal effect takes the average of\n\\(\\tau^{To}_g(\\psi,\\phi)\\) across groups. Under no-interference, the\nindirect causal effect is zero and the total causal effect equals the\ndirect causal effect. Finally, the overall causal effect corresponds to\nthe group's response to different treatment saturation levels. The\n\\emph{group average overall causal effect} can be written as\n\\(\\tau^{O}_g(\\psi,\\phi) = \\frac{1}{n_g} \\sum_{i = 1}^{n_g} y_{ig}(\\psi) - \\frac{1}{n_g} \\sum_{i = 1}^{n_g} y_{ig}(\\phi)\\),\nwhere \\(y_{ig}(\\psi)\\) is the average potential outcome for unit \\(i\\)\nin group \\(g\\) under all possible assignments given saturation \\(\\psi\\).\nWe average the \\(\\tau^{O}_g(\\psi,\\phi)\\) across groups to obtain the\naverage overall causal effect.\n\nUnder Bernoulli random assignment, these causal quantities are well\ndefined and have a clean \\emph{ceteris paribus} interpretation. Under\nany other design, the causal interpretation is not always clean. 
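These definitions imply an exact decomposition, \\(\\tau^{To}_g(\\psi,\\phi) = \\tau^D_g(\\psi) + \\tau^I_g(\\psi,\\phi)\\), since the term \\(\\frac{1}{n_g}\\sum_{i} y_{ig}(0; \\psi)\\) cancels. A short Python sketch with hypothetical marginal potential outcomes for one group of three units verifies this:

```python
# Hypothetical marginal potential outcomes y[(z, s)] for a group of
# n_g = 3 units at saturations psi = 2/3 and phi = 1/3 (illustrative only).
psi, phi = 2 / 3, 1 / 3
y = [
    {(1, psi): 0.9, (0, psi): 0.5, (1, phi): 0.7, (0, phi): 0.3},
    {(1, psi): 0.8, (0, psi): 0.4, (1, phi): 0.6, (0, phi): 0.2},
    {(1, psi): 0.7, (0, psi): 0.6, (1, phi): 0.5, (0, phi): 0.4},
]

def mean(z, s):
    """Group average marginal potential outcome at treatment z, saturation s."""
    return sum(unit[(z, s)] for unit in y) / len(y)

direct_psi = mean(1, psi) - mean(0, psi)   # group average direct effect
indirect = mean(0, psi) - mean(0, phi)     # group average indirect effect
total = mean(1, psi) - mean(0, phi)        # group average total effect

# The total effect decomposes exactly into direct plus indirect.
assert abs(total - (direct_psi + indirect)) < 1e-9
```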
Under complete random assignment, for example, the individual-level direct effect for unit \\(i\\) would contrast outcomes with different numbers of group members other than \\(i\\) assigned to treatment (i.e., \\(\\psi n_g - 1\\) treated others when \\(i\\) is assigned to treatment and \\(\\psi n_g\\) when \\(i\\) is assigned to control). This point is discussed by \\citet{savje2017average}.\n\nLet the observed outcome be \\(Y_{ig}=y_{ig}(\\mathbf{Z}_{ig})\\) under the group treatment assignment \\(\\mathbf{Z}_{ig}\\) and suppose treatment saturation is \\(\\psi\\). Given Bernoulli assignment with probability \\(\\psi\\), an unbiased estimator for \\(\\frac{1}{n_g} \\sum_{i = 1}^{n_g} y_{ig}(z; \\psi)\\), with \\(z \\in \\{0,1\\}\\), is\n\\[\\widehat{y}_{g}(z; \\psi) = \\frac{\\sum_{i = 1}^{n_g} Y_{ig}\\mathbf{I}(Z_{ig}=z)}{\\sum_{i = 1}^{n_g}\\mathbf{I}(Z_{ig}=z)},\\]\nand for the population average potential outcome, it is\n\\[\\widehat{y}(z; \\psi) = \\frac{\\sum_{g = 1}^{G} \\widehat{y}_{g}(z; \\psi)\\mathbf{I}(S_g = \\psi)}{\\sum_{g = 1}^{G}\\mathbf{I}(S_g = \\psi)},\\]\nwhere \\(S_g\\) is the saturation level of group \\(g\\) (conditional on the denominators of these estimators being nonzero). 
Therefore, an unbiased estimator for the average direct effect is \\(\\widehat{\\tau}^D(\\psi)=\\widehat{y}(1; \\psi) - \\widehat{y}(0; \\psi)\\). Unbiased estimators for the other causal estimands of interest are defined analogously:\n\\(\\widehat{\\tau}^I(\\psi, \\phi) = \\widehat{y}(0; \\psi) - \\widehat{y}(0; \\phi)\\),\n\\(\\widehat{\\tau}^{To}(\\psi, \\phi) = \\widehat{y}(1; \\psi) - \\widehat{y}(0; \\phi)\\),\nand\n\\(\\widehat{\\tau}^O(\\psi, \\phi)=\\widehat{y}(\\psi) - \\widehat{y}(\\phi)\\),\nwhere \\(\\widehat{y}(\\psi)\\) is the average, across groups assigned to saturation \\(\\psi\\), of the average observed outcomes of units in the group under treatment assignment \\(\\mathbf{Z}_{ig}\\).\n\nFor inference, \\citet{hudgens2008toward} derive variance estimators under an assumption called stratified interference. This refers to the situation where potential outcomes for unit \\(i\\) in group \\(g\\) do not vary with which other units in group \\(g\\) are assigned to treatment, but only with the number or share of other units assigned to treatment. This assumption reduces the problem statistically to the usual stratified setting without interference. When stratified interference holds, their proposed variance estimators are unbiased if unit causal effects are additive (e.g., \\(y_{ig}(1,\\psi) = y_{ig}(0, \\psi) + d\\), where \\(d\\) is a constant), and otherwise positively biased. \\citet{liu_hudgens2014} discuss conditions for asymptotic normality. \\citet{tchetgen2012causal} extend \\citet{hudgens2008toward}'s results, providing conservative variance estimators, a framework for finite sample inference with binary outcomes, and extensions to observational studies. \\citet{liu2014large} develop asymptotic results for two-stage designs.\n\nBoth \\citet{sinclair2012detecting} and \\citet{baird2017optimal} discuss statistical power for hierarchical designs. 
\\citet{baird2017optimal} offer a thorough consideration of the optimal choice (in terms of statistical power) of saturation levels (\\(\\psi\\) and \\(\\phi\\)) and of the distribution of these levels over groups for estimating direct, indirect, total, and overall effects. Their methods assume that the population is partitioned into equal-sized, non-overlapping groups, that partial interference and stratified interference hold, and a linear-in-means outcome model. The optimal set of saturations and shares of groups assigned to each saturation depends on the correlation of potential outcomes within groups.\n\nWe show with a toy example how to compute these estimators and their variance. Suppose we have 18 units equally divided into 6 groups. In the first stage, half of the groups are assigned to saturation \\(\\psi\\) and the other half to \\(\\phi\\), with equal probability. In the second stage, using complete random assignment, two-thirds of the units in groups with saturation \\(\\psi\\) are assigned to the treatment condition and one-third to the control condition, while one-third of the units in groups with saturation \\(\\phi\\) are assigned to treatment and two-thirds to control. For the purposes of the simulation, we assume stratified interference and compute potential outcomes under a dilated effects scenario such that \\(y_{ig}(1, \\psi)=2 \\times y_{ig}(0, \\phi)\\), \\(y_{ig}(1, \\phi)=1.5 \\times y_{ig}(0, \\phi)\\), and \\(y_{ig}(0, \\psi)=1.25 \\times y_{ig}(0, \\phi)\\), where \\(y_{ig}(0, \\phi)\\) is obtained as the absolute value of a draw from a standard normal distribution.\n\nWe display the structure of the post-treatment data that the experimenter needs in order to compute the estimators. (We show the data of six units only.) 
In this case, the realized saturation for \\emph{group\n1} is \\(\\psi\\) (the group is assigned to the treatment condition in the\nfirst stage as indicated in column ``group\\_tr'' with value 1, and 2\/3\nof units within the group are treated in the second stage as shown in\ncolumn ``indiv\\_tr'') and for \\emph{group 4} is \\(\\phi\\) (the group is\nassigned to the control condition in the first stage, and therefore 1\/3\nof units are treated in the second stage):\n\n\\begin{Shaded}\n\\begin{Highlighting}[]\n\\NormalTok{post_tr_data[}\\KeywordTok{c}\\NormalTok{(}\\DecValTok{1}\\OperatorTok{:}\\DecValTok{3}\\NormalTok{,}\\DecValTok{10}\\OperatorTok{:}\\DecValTok{12}\\NormalTok{),]}\n\\end{Highlighting}\n\\end{Shaded}\n\n\\begin{verbatim}\n group group_tr indiv_tr obs_outcome\n1 1 1 0 1.9269359\n2 1 1 1 0.2788864\n3 1 1 1 0.9606388\n10 4 0 1 0.9062178\n11 4 0 0 1.0599419\n12 4 0 0 0.6051009\n\\end{verbatim}\n\nThe estimators and their variance are computed under the stratified\ninterference assumption with the following function from the\n\\texttt{interference} package:\n\n\\begin{Shaded}\n\\begin{Highlighting}[]\n\\NormalTok{estimates <-}\\StringTok{ }\\KeywordTok{estimates_hierarchical}\\NormalTok{(post_tr_data)}\n\\NormalTok{causal_effects <-}\\StringTok{ }\\KeywordTok{unlist}\\NormalTok{(estimates[[}\\DecValTok{1}\\NormalTok{]])}\n\\NormalTok{variance <-}\\StringTok{ }\\KeywordTok{unlist}\\NormalTok{(estimates[[}\\DecValTok{2}\\NormalTok{]])}\n\\NormalTok{causal_effects}\n\\end{Highlighting}\n\\end{Shaded}\n\n\\begin{verbatim}\ndirect_psi_hat direct_phi_hat indirect_hat total_hat overall_hat \n 1.0009851 0.2318532 0.2195894 1.2205745 0.8096284 \n\\end{verbatim}\n\n\\begin{Shaded}\n\\begin{Highlighting}[]\n\\NormalTok{variance}\n\\end{Highlighting}\n\\end{Shaded}\n\n\\begin{verbatim}\nvar_direct_psi_hat var_direct_phi_hat var_indirect_hat \n 0.71255260 0.04854851 0.28000088 \n var_total_hat var_overall_hat \n 0.88936810 0.39005445 
\n\\end{verbatim}\n\n\\subsection{Misspecifying partial\ninterference}\\label{misspecifying-partial-interference}\n\nWhat happens when the experimenter assumes partial interference but in\nfact there is interference not only within groups, but also across\ngroups? We show that when partial interference does not hold, the\nproposed estimators for the direct, indirect, total and overall causal\neffects are biased.\n\nWe assume a scenario in which groups belong to tracts and interference\nhappens within groups and across groups within tracts, but not across\ntracts. Each tract is composed of two groups. The sample size is 450\nunits, equally divided into six groups. As before, units are assigned to\ntreatment under a two-stage randomized experiment, with treatment\nsaturation set to \\(\\psi=2\/3\\) and baseline saturation to \\(\\phi=1\/3\\).\nWe assume stratified interference at the level of \\emph{tracts} to\ncompute unit potential outcomes. We continue to use a dilated effects\nscenario, but in this case it is the proportion of treated units in the\ntract that matters, as opposed to the proportion of treated units in the\ngroup.\n\nCausal estimands are in this case defined as differences between unit\npotential outcomes, conditional on a tract saturation. For example, the\npopulation average direct causal effect under treatment intensity\n\\(\\psi\\), tract size \\(n_{tr}\\), and \\(Tr\\) tracts is now defined as\n\\(\\tau^D(\\psi\\psi) = \\frac{1}{Tr}\\sum_{t_r = 1}^{Tr}\\left[\\frac{1}{n_{tr}} \\sum_{i = 1}^{n_{tr}} y_{it_r}(1; \\psi\\psi) - \\frac{1}{n_{tr}} \\sum_{i = 1}^{n_{tr}} y_{it_r}(0; \\psi\\psi)\\right]\\).\nThe notation \\(\\tau^D(\\psi\\psi)\\) implies that the realized saturation\nfor unit \\(i\\)'s group and the other group in \\(i\\)'s tract is \\(\\psi\\).\n(In this example, \\(4\/6\\) of the units in tract \\(t_r\\) are assigned to\nthe treatment condition, given that groups and tracts are of equal\nsize.) 
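A minimal Python sketch of this two-stage assignment with a tract-level data generating process follows; the outcome's functional form and coefficients are illustrative placeholders, not the exact dilated-effects simulation used in this section:

```python
import random

random.seed(7)
G, n_g = 6, 75                        # six groups of 75 units (450 total)
psi, phi = 2 / 3, 1 / 3

# First stage: half of the groups to saturation psi, half to phi.
saturations = [psi] * 3 + [phi] * 3
random.shuffle(saturations)

# Groups are paired into tracts; interference operates within tracts.
tracts = [(0, 1), (2, 3), (4, 5)]

def assign(s):
    """Complete random assignment of round(s * n_g) treated units."""
    z = [1] * round(s * n_g) + [0] * (n_g - round(s * n_g))
    random.shuffle(z)
    return z

z = {g: assign(saturations[g]) for g in range(G)}

# Potential outcomes depend on own treatment and on the *tract-level*
# treated share (illustrative functional form).
def outcome(zi, tract_share, y0):
    return y0 * (1 + zi + 0.75 * tract_share)

data = []
for ga, gb in tracts:
    share = (sum(z[ga]) + sum(z[gb])) / (2 * n_g)
    for g in (ga, gb):
        for zi in z[g]:
            y0 = abs(random.gauss(0, 1))   # |N(0,1)| baseline outcome
            data.append((g, zi, outcome(zi, share, y0)))

print(len(data), sum(zi for _, zi, _ in data))  # 450 units, 225 treated
```

Because outcomes respond to the tract-level share, a group at saturation \\(\\psi\\) has different outcome distributions depending on whether its tract partner was assigned \\(\\psi\\) or \\(\\phi\\).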
However, when the interference dependency structure is misspecified, in particular when partial interference does not hold, \\(\\widehat{\\tau}^D(\\psi)\\) is a biased estimator for \\(\\tau^D(\\psi\\psi)\\), because it includes the observed outcomes of units in groups with realized saturation \\(\\psi\\) that sit in tracts where the other group is assigned saturation \\(\\phi\\). In other words, the estimator for the population average potential outcome under saturation \\(\\psi\\) is\n\\[\\widehat{y}(z; \\psi)=\\frac{\\sum_{g = 1}^{G} \\widehat{y}_{g}(z; \\psi\\psi)\\mathbf{I}(S_g = \\psi\\psi) + \\sum_{g = 1}^{G} \\widehat{y}_{g}(z; \\psi\\phi)\\mathbf{I}(S_g = \\psi\\phi)}{\\sum_{g = 1}^{G}\\mathbf{I}(S_g = \\psi\\psi) + \\sum_{g = 1}^{G}\\mathbf{I}(S_g = \\psi\\phi)},\\]\nwhere \\(S_g\\) now denotes the realized saturation of group \\(g\\)'s tract. The average across groups is thus a mixture of the average observed outcomes of units with treatment \\(z\\) in groups with realized saturation \\(\\psi\\) whose tract's other group also has realized saturation \\(\\psi\\), and of those whose tract's other group has realized saturation \\(\\phi\\).\n\nIn Figure \\ref{fig:miss_hh} we compare the estimators of the direct, indirect, total, and overall causal effects when partial interference is misspecified (top row), on the basis of the data generating process defined in the preceding paragraphs, and when it is correctly specified (bottom row). The estimators are unbiased when partial interference holds (bottom row), but biased when partial interference does not hold and the experimenter assumes that it does (top row).\n\n\\begin{figure}\n\n{\\centering \\includegraphics{.\/unnamed-chunk-17-1} \n\n}\n\n\\caption{\\label{fig:miss_hh}The plot shows the distribution of point estimates for 3000 simulations with partial interference specified at the level of groups only. 
In the upper row, the data generating process has interference both within and across groups (general interference); in the bottom row, interference occurs within groups only (partial interference). The dashed vertical line represents the value of the estimand.}\n\\end{figure}\n\n\\subsection{Empirical Studies}\\label{empirical-studies-1}\n\n\\citet{sinclair2012detecting} assess peer effects within a large-scale voter-mobilization hierarchical experiment (of about 70,000 individuals) conducted in Chicago during a special election in 2009, using as treatment social-pressure mailings, sent shortly before the election, that disclose whether a member of the household has voted in prior elections. The goal is to estimate total causal effects of the mailings and indirect causal effects across and within households. In the first stage of randomization, neighborhoods (approximately equal-sized, with at most 15 households) are assigned to one of four saturations: 100\\% of households in the neighborhood treated, 50\\%, only one household, or none. In the second stage, households are randomly assigned to treatment according to the neighborhood saturation, and exactly one individual within each household is randomly selected to receive the social-pressure message. Therefore, in one-person households, that person has probability one of being treated, and in two-person and three-person households, each person has a one-half and one-third probability of receiving the treatment, respectively. The key assumption is partial interference at the level of neighborhoods. 
The authors find positive total causal effects of receiving the message on turnout and some evidence of within-household effects, but no evidence of interference across households.\n\n\\citet{nickerson2008voting} runs a two-stage placebo-controlled experiment in which, in the first stage, households with two registered voters are assigned to either a 50\\% saturation, a 0\\% placebo, or a 0\\% pure control saturation. In households assigned to the 50\\% saturation, residents who answer the door receive a face-to-face get-out-the-vote message (GOTV condition), whereas in households assigned to the 0\\% placebo, residents who answer get a recycling pitch (recycling condition). The experiment was conducted during the 2002 Congressional primaries in Denver and Minneapolis, with households contacted the weekend before the election. This design identifies the average direct effect of mobilization with a difference-in-means estimator of observed voter turnout among reachable voters (those who answer the door) across the GOTV and recycling conditions. Likewise, the average indirect effect is the difference in means of observed voter turnout among unreachable residents (those who did not answer the door) across the GOTV and recycling conditions. 
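Both comparisons are simple difference-in-means computations. A Python sketch with hypothetical individual-level data illustrates them; the placebo arm is what makes the reachable and unreachable subsets comparable across conditions:

```python
# Hypothetical rows: (condition, reachable, voted), where reachable == 1
# means the resident answered the door and reachable == 0 means the
# resident is the housemate who did not answer.
rows = [
    # GOTV (canvassing) households
    ("gotv", 1, 1), ("gotv", 1, 1), ("gotv", 1, 0),
    ("gotv", 0, 1), ("gotv", 0, 1), ("gotv", 0, 0),
    # Placebo (recycling pitch) households
    ("recycling", 1, 1), ("recycling", 1, 0), ("recycling", 1, 0),
    ("recycling", 0, 1), ("recycling", 0, 0), ("recycling", 0, 0),
]

def turnout(condition, reachable):
    """Observed turnout rate within a condition-by-reachability cell."""
    votes = [v for c, r, v in rows if c == condition and r == reachable]
    return sum(votes) / len(votes)

# Average direct effect: reachable voters, GOTV vs. placebo.
direct = turnout("gotv", 1) - turnout("recycling", 1)
# Average indirect effect: unreachable housemates, GOTV vs. placebo.
indirect = turnout("gotv", 0) - turnout("recycling", 0)
print(round(direct, 3), round(indirect, 3))  # 0.333 0.333
```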
The author finds a statistically significant increase in turnout of about 6 percentage points among unreachable residents in GOTV households, and of 9.8 percentage points among reachable residents, suggesting that when one voter opens the door to a canvasser, 60\\% of the effect of the get-out-the-vote appeal is transmitted to the other household member.\n\nWith the same design, \\citet{bond2018contagion} analyzes the data of a face-to-face canvassing experiment by \\citet{broockman2016durably} encouraging active perspective taking intended to reduce transphobia. The effects of the perspective-taking exercise spill over to household residents who do not answer the door, reducing their anti-transgender prejudice. Other empirical studies using a hierarchical design include \\citet{duflo2003role}, who analyze spillover effects in individuals' choice of retirement plan, and \\citet{miguel2004worms}, who study the effects of a deworming medical treatment on the health and school participation of untreated children in (partially) treated schools and of children in neighboring control schools. Similarly, \\citet{angelucci2009indirect} analyze indirect effects of the cash transfer program Progresa on consumption. In this case, only the first stage is randomized, assigning municipalities to treatment or control, while in the second stage a subset of individuals is offered treatment based on their income. This design identifies direct and indirect effects when there is partial interference, but cannot identify whether these effects vary with the intensity of treatment because there is no exogenous variation in treatment saturation. (This is also true of \\citet{nickerson2008voting}'s and \\citet{duflo2003role}'s designs, in which the group-level saturations are fixed at 50\\%.) 
A similar case is\npresented by \\citet{sur2009cluster}, who study the effectiveness of a\ntyphoid vaccine with a design that randomly assigns geographic clusters\nto receive the vaccine or a placebo vaccine, but individuals\nself-selected into treatment in the second stage. Likewise, the design\nby \\citet{wilke2019placebo} to study the effectiveness of\neducation-entertainment on attitudes towards violence against women,\nteacher absenteeism, and abortion stigma, first assigned villages to\ntreatments and in the second stage individuals self-selected into\ntreatment. \\citet{crepon2013labor} implement a two-stage hierarchical\ndesign in the context of a job placement assistance program in France in\nwhich cities are assigned to either one of four positive saturations or\na control condition, and then job seekers are randomly assigned to\ntreatment according to their city saturation. The program has positive\ndirect effects, but negative spillover effects: the likelihood of\nfinding a stable job for untreated job seekers in positive saturation\ncities is smaller than for untreated job seekers in control areas.\n\n\\citet{bhatti2017voter} combine a hierarchical design with network data\non family ties to assess spillover effects of a GOTV experiment that\nmobilized young voters with text messages during municipal and European\nParliament elections in Denmark, and \\citet{gine2018together} assess\ndirect and indirect effects of a voter awareness campaign on female\nturnout and candidate choice in Pakistan by assigning first geographical\nclusters within villages to either one of two treatments or to control\n(with unequal probabilities because the number of clusters varied by\nvillage) and then randomly targeting a subset of households in treatment\nclusters. 
\\citet{basse2018analyzing} evaluate an intervention to reduce\nstudent absenteeism in Philadelphia with a two-stage randomization\ndesign in which households with multiple students were first assigned to\ntreatment or control, then exactly one student in treatment households\nwas randomly selected for the student's parents to receive\nstudent-specific information. The authors address the practical problem\npresented to researchers when household size varies (or when the number\nof geographical clusters across villages varies as in\n\\citet{gine2018together}): whether to assign equal weight to individuals\nor to households, depending on what is relevant from a policy\nperspective. The authors propose unbiased estimators for individual and\nhousehold weighted estimands.\n\n\\section{Contagion}\\label{contagion}\n\nBy analogy to biological contagion, (social) contagion refers to\nprocesses through which the outcomes of one unit causally affect the\noutcomes of another unit. Such processes were illustrated above in\nFigure \\ref{fig:interference_dag} as the path \\(Y_1 \\rightarrow Y_2\\).\nIn these narrow terms, contagion is distinct from interference. That\nsaid, contagion can be a mechanism through which interference occurs,\nand one that may call into question assuming a particular exposure\nmapping that limits the extent of interference\n\\citep{manski2013identification, eckles2017design}. Assessing whether\nspillover effects are due to contagion amounts to conducting a mediation\nanalysis, where the mediators are treated units' outcomes\n\\citep{vanderweele2012components, ogburn2017vaccines}. This was\ndisplayed in Figure \\ref{fig:interference_dag} as the path\n\\(Z_1 \\rightarrow Y_1 \\rightarrow Y_2\\), where \\(Y_1\\) is the mediator\nin the contagion process. 
To evaluate whether spillover effects are due to contagion, conditions must hold that identify both spillover effects (as discussed in this chapter) and mediation effects. Identifying mediation effects requires that other types of conditional independence hold, such as sequential ignorability \\citep{imai-etal2010-mediation, pearl2014}. These are strong assumptions about the data generating process and typically cannot be induced directly by an experimental design \\citep{imai-etal2012-mechanisms-experiments}.\n\n\\citet{imaiidentification} re-analyze the two-stage placebo-controlled get-out-the-vote (GOTV) experiment by \\citet{nickerson2008voting}. Their goal is to analyze whether canvassing increases the turnout of the voter who does not answer via effects on the vote intention of the reachable voter (contagion), or via other channels such as conversations within the household (non-contagion spillover). They start with a decomposition of the indirect effect into the sum of a \\emph{contagion effect}---canvassing influences the turnout of the untreated voter of a contacted household by changing the vote intention of the treated voter (approximated by turnout)---and effects due to other mechanisms. With reference to the setting of \\citet{nickerson2008voting}, this decomposition rests on the following assumptions: there is no spillover across households; reachable voters form a vote intention immediately after being contacted, where we denote the potential vote intention as \\(y_{i1}^*(z_i) \\in \\{0,1\\}\\); and the turnout of all voters in all households is observed. Let \\(y_{i1}(z_i)\\) and \\(y_{i2}(z_i, y_{i1}^*(z_i))\\) represent the potential voting outcomes of the reachable and unreachable voters in complier household \\(i\\), respectively. Causal quantities of interest are defined only for compliers (that is, residents of households where someone answers the door). 
As such, the analysis is limited to households in which someone\nanswered the door either in the canvassing or the placebo (recycling)\nconditions.\n\nThe indirect effect of the GOTV campaign is thus defined as the\ndifference in the potential outcomes of unreachable voters when their\nhousehold is assigned to the GOTV condition (\\(z_i=1\\)) as opposed to a\ncontrol condition (\\(z_i=0\\)). The average indirect effect is obtained\nby taking the average difference across complier households and can be\nexpressed (in terms of a finite population of \\(N_c\\) complier\nhouseholds) as \\[\n\\theta= \\frac{1}{N_c}\\sum_{i = 1}^{N_c} y_{i2}(1, y_{i1}^*(1)) - y_{i2}(0, y_{i1}^*(0)).\n\\] Then, \\(\\theta\\) is decomposed into the sum of the contagion effect\nand the effect of other mechanisms by considering the vote intention of\nthe treated voter \\(y_{i1}^*\\) as the mediator.\n\\citet{imaiidentification} define an ``average contagion effect'' as\nfollows (again, written in terms of a finite population of \\(N_c\\)\ncomplier households): \\[\n\\gamma(z) = \\frac{1}{N_c} \\sum_{i=1}^{N_c} y_{i2}(z, y_{i1}^*(1)) - y_{i2}(z, y_{i1}^*(0)).\n\\] Note that it is possible for \\(y_{i1}^*(1) = y_{i1}^*(0)\\), in which\ncase there would be no contagion effect for household \\(i\\). The\nquantity \\(\\gamma(z)\\) thus aggregates over cases where there could or\ncould not be contagion effects, and then for the former, over the\nmagnitude of any contagion effect. It is therefore analogous to what is\nknown as a ``natural'' effect, rather than a ``controlled'' effect, in\nthe mediation literature \\citep{imai-etal2010-mediation}. 
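A short Python sketch with hypothetical potential outcomes for a single complier household illustrates \\(\\gamma(z)\\), including the case \\(y_{i1}^*(1) = y_{i1}^*(0)\\) in which the household contributes no contagion effect:

```python
# Hypothetical potential outcomes for the unreachable voter: y2[(z, m)],
# where z is the household assignment and m is the reachable voter's vote
# intention y1*(z). All numbers are illustrative.
y2 = {(0, 0): 0.30, (0, 1): 0.45, (1, 0): 0.35, (1, 1): 0.60}

def gamma(z, y1_star):
    """Contagion contrast at fixed z: flip the mediator y1* from its
    control-world value to its treatment-world value."""
    return y2[(z, y1_star[1])] - y2[(z, y1_star[0])]

# A household whose vote intention responds to canvassing:
print(round(gamma(1, {0: 0, 1: 1}), 2))  # 0.25

# A household whose vote intention does not respond (y1*(1) == y1*(0)):
print(round(gamma(1, {0: 1, 1: 1}), 2))  # 0.0
```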
Other,\nnon-contagion mechanisms are captured by \\[\n\\eta(z)=\\frac{1}{N_c} \\sum_{i=1}^{N_c} y_{i2}(1, y_{i1}^*(z)) - y_{i2}(0, y_{i1}^*(z)).\n\\] The indirect effect can be decomposed as\n\\(\\theta = \\gamma(1) + \\eta(0) = \\gamma(0) + \\eta(1)\\).\n\nIdentification of this decomposition requires a sequential ignorability\nassumption from mediation analysis. Such an assumption implies that\nconditional on treatment status and pre-treatment covariates, the vote\nintention of the treated voter is independent of the potential outcome\nof the unreachable voter. This assumption would be violated in the\npresence of unobserved confounders (such as political efficacy) that\naffect both the vote intention of the reachable voter and turnout of the\nunreachable voter. It is important to note that this assumption is\nstronger than the usual ignorability condition necessary for\nobservational studies because it requires so-called ``cross-world\nindependence assumptions''---i.e., assumptions about potential outcomes\nthat can never be revealed by experimentation. Under this key\nassumption, \\citet{imaiidentification} estimate that indirect effects\ncan be largely explained by contagion, even for households whose treated\nvoter is a Democrat and unreachable voter is a Republican. Because this\nmediation analysis involves such strong assumptions, the authors\nreasonably conduct a sensitivity analysis, examining how robust these\nconclusions are to violations of the sequential ignorability assumption.\n\nOther empirical studies exploring contagion effects include\n\\citet{forastiere2016identification}, who analyze contagion of an\nencouragement program on households' use of bed nets in Zambia. 
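Returning to the decomposition above, note that \(\theta = \gamma(1) + \eta(0) = \gamma(0) + \eta(1)\) is a purely algebraic identity at the household level, so it can be checked mechanically. The sketch below does so for a handful of complier households; all potential-outcome values are invented for illustration only.

```python
# Toy check of the indirect-effect decomposition theta = gamma(1) + eta(0) = gamma(0) + eta(1).
# All potential outcomes below are hypothetical values for illustration.

# For each complier household i: vote intention of the reachable voter y1_star[z],
# and turnout of the unreachable voter y2[(z, m)] as a function of assignment z
# and the mediator m = y1_star.
households = [
    {"y1_star": {0: 0, 1: 1}, "y2": {(0, 0): 0, (0, 1): 0, (1, 0): 0, (1, 1): 1}},
    {"y1_star": {0: 1, 1: 1}, "y2": {(0, 0): 0, (0, 1): 1, (1, 0): 1, (1, 1): 1}},
    {"y1_star": {0: 0, 1: 0}, "y2": {(0, 0): 0, (0, 1): 1, (1, 0): 1, (1, 1): 1}},
]
N_c = len(households)

def theta():
    # average indirect effect: vary assignment, letting the mediator follow it
    return sum(h["y2"][(1, h["y1_star"][1])] - h["y2"][(0, h["y1_star"][0])]
               for h in households) / N_c

def gamma(z):
    # average contagion effect: hold assignment at z, vary only the mediator
    return sum(h["y2"][(z, h["y1_star"][1])] - h["y2"][(z, h["y1_star"][0])]
               for h in households) / N_c

def eta(z):
    # other mechanisms: hold the mediator at its z-value, vary only assignment
    return sum(h["y2"][(1, h["y1_star"][z])] - h["y2"][(0, h["y1_star"][z])]
               for h in households) / N_c

assert abs(theta() - (gamma(1) + eta(0))) < 1e-12
assert abs(theta() - (gamma(0) + eta(1))) < 1e-12
```

Because \(\gamma(1) + \eta(0)\) telescopes to \(y_{i2}(1, y_{i1}^*(1)) - y_{i2}(0, y_{i1}^*(0))\) household by household, the assertions hold for any choice of potential outcomes; the empirical difficulty lies in identification, not in the algebra.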
Relying\non semi-parametric outcome models, \\citet{ferrali2018peer} study the\neffects of an encouragement campaign on the adoption of a new political\ncommunication technology in Uganda, and \\citet{vasquez2018excombatants}\nleverages exogenous shocks to analyze contagion effects of criminal\nbehavior among ex-combatants in Colombia. One way an analysis of contagion\ncan avoid the strong sequential ignorability assumptions is to instead\nassume complete mediation (i.e., an exclusion restriction of\ninstrumental variables estimation) such that all spillover effects are\ndue to contagion via a particular outcome; for example,\n\\citet{eckles2016estimating} conduct an experiment in which they posit\nthat treatment of an individual's peers only affects them via specific\ndirected behaviors.\n\n\\section{Conclusion}\\label{conclusion}\n\nStandard methods for analyzing experiments assume no interference,\nmeaning that a unit's own treatment status is all that one needs to know\nto characterize its outcome. In many settings, including many of the\nempirical examples discussed in this chapter, such an assumption is\nunwarranted. The resulting spillovers may merely represent a nuisance for\nestimating quantities of interest. In such cases, experimenters may want\nto introduce adjustments to their designs so as to minimize the\npotential for exposure to other units. Alternatively, experimenters\ncould try working at a higher level of aggregation at which interference\nis less likely to be a concern. On the other hand, researchers may have\na substantive interest in estimating spillover effects. This chapter\npresumed such an interest and proposed methods for doing so.\n\nWe reviewed two analytical frameworks for estimating spillover effects\nin experiments. In the first, the structure of interference is known but\ncan be of almost arbitrary form. 
In the second, the interference\nstructure is mostly unknown except that the experimenter can be\nconfident that interference is fully contained within non-overlapping\ngroups. We demonstrated how one can work under either framework to\nestimate spillover effects using the \\emph{interference} R package. We\nalso illustrated the implications of specifying the nature and extent of\ninterference.\n\nOur review of empirical studies demonstrates the relevance of spillover\neffects to various social phenomena, such as voting, petitioning,\nstudent behavior, norms against violence, prejudice, economic decisions,\nand subjective well-being. These studies also offer examples of designs\nthat operationalize the analytical frameworks. We hope that the\nanalytical foundation and examples provided can help experimenters to\npush both the methodological and empirical frontiers in our\nunderstanding of spillover effects.\n\n\\renewcommand\\refname{References}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}}
However, creating such a mesh manually is infeasible for large-scale, high-resolution simulations.\n\n\\begin{figure}\n \\centering\\includegraphics[width=0.6\\columnwidth]{mesh.png}\n \\caption{An example of an unstructured computational mesh which discretises the marine area around the North-East coast of Scotland. The resolution is highest around the Scottish coastline and around the Orkney and Shetland Isles.}\n \\label{fig:mesh}\n\\end{figure}\n\nGeographical Information Systems (GIS) offer an effective way of processing bathymetry and coastline data to create a geometry with which to work \\cite{Li_2000}. A method of producing a computational mesh from this geometry is then required to perform a simulation on it. QMesh \\cite{Avdis_etal_InPreparation} is a software package currently being developed at Imperial College London for this purpose. QMesh reads in a geometry defined in the QGIS Geographical Information System software \\cite{QGIS_software}, and then converts the geometry into a readable format for the Gmsh mesh generation software \\cite{GeuzaineRemacle_2009}, which in turn generates the mesh to provide a discrete representation of the domain. Ocean simulations may then be performed with a computational fluid dynamics package.\n\nPublications that are dependent on numerical simulation often provide details of the simulation setups to improve reproducibility and indeed recomputability. However, while a description of the domain may also be given, the mesh that discretises this domain is rarely provided as a supplementary material. This lack of data availability has also been highlighted in many other areas of science \\cite{Whitlock_etal_2010}, \\cite{Alsheikh-Ali_etal_2011}, \\cite{Vines_etal_2013}. Furthermore, citations to the software used to produce the mesh typically only refer to a generic user manual and contain no information about which version was used. 
For the purpose of recomputability and reproducibility, it is crucial that researchers provide \\textit{all} the data files, as well as the precise version of the software's source code used to produce the output in the first place \\cite{deLeeuw_2001}, \\cite{BuckheitDonoho_1995}. In the case of this work, the input data is the geographical information defining the domain, the output data is the computational mesh, and the software is QMesh (and its dependencies).\n\nDespite the need for a more open research environment where software and datasets are shared freely, the level of motivation amongst researchers to do this is generally quite low. This is in part due to the extra effort and time required to gather and publish the data \\cite{LeVeque_etal_2012}, whilst researchers typically gain little from the process. To encourage the sharing of data and improve its reproducibility and recomputability, it is therefore important to make the publication process more straightforward and swift. This can be effected by the development of research data management tools that readily capture the datasets involved and information about the software being used \\cite{Stodden_etal_2013}, \\cite{LeVeque_etal_2012}.\n\nThis paper describes the integration of a research data management tool, which uses the PyRDM library \\cite{Jacobs_etal_2014a}, into the QMesh software. The tool automates the publication of the QMesh source code, as well as the input and output data for a specified QGIS project, to online, citable and persistent repositories such as those provided by Figshare (figshare.com), Zenodo (zenodo.org) and DSpace-based (dspace.org) hosting services. The tool has both a command line and a graphical user interface, and allows users to publish the software and data at the `push of a button', thereby facilitating sharing and a more open research environment. 
In contrast to other software tools that also facilitate the publication of code and datasets, such as Fidgit \\cite{Smith_2013}, rfigshare \\cite{Boettiger_etal_2014}, and dvn \\cite{Leeper_2014}, the QMesh publishing tool incorporates application-specific knowledge to provide a greater amount of automation. For example, the tool is able to parse QGIS project files to automatically determine the relevant input data to publish, rather than the user having to specify the data files manually. Furthermore, this work represents a novel application of research data management and curation software within a GIS environment.\n\nSection \\ref{sect:integration} describes in greater detail the extensions made to the QMesh software to automate the publication process for the software itself, the input files (for a given QGIS project) and any output files (i.e. the computational mesh). Section \\ref{sect:example} presents a realistic example of a scientific workflow involving production of a mesh of a UK coastal region. The data files are read in to QGIS and a mesh is produced. Both the QGIS data and mesh are subsequently published to an online repository provided by Figshare, and a DOI is assigned which can be used to properly cite the data in journal articles. Finally, some concluding remarks are made in Section \\ref{sect:conclusions}.\n\n\\section{Integration with QMesh}\\label{sect:integration}\nQMesh features a command line interface (CLI), as well as a graphical user interface (GUI) via a QGIS plugin through which users can select relevant geometry objects and produce a mesh. The integration of research data management techniques into QMesh was achieved by adding a PyRDM-based publishing tool to both of these interfaces.\n\nThe tool provides the option of publishing the QMesh software source code and data required to reproduce the mesh to separate online repositories. 
Users are presented with a simple interface and only have to provide a minimal amount of information; this is illustrated in Figure \\ref{fig:publish_dialog}. The publication process itself is handled by the PyRDM library \\cite{Jacobs_etal_2014a} which communicates with an online repository hosting service via its Application Programming Interface (API). The publication process results in a Digital Object Identifier (DOI) \\cite{DavidsonDouglas_1998} being assigned to the repository, with which users can properly cite their research outputs.\n\n\\begin{figure}\n \\centering\\includegraphics[width=0.75\\columnwidth]{publish_dialog.png}\n \\caption{The QMesh publisher tool, which is part of the QMesh QGIS plugin. Users choose the online repository service that they wish to use; by default this is set to Figshare. In addition to the input data files associated with the QGIS project, users may also publish the output data file (i.e. the resulting computational mesh) produced by QMesh, if they so desire. By default, the publication is made public unless the user decides otherwise; in the case of private publication, a DOI is still assigned to the repository, but will not be made active\/`live' until the repository is made public.}\n \\label{fig:publish_dialog}\n\\end{figure}\n\nThe publication of data is handled separately to the publication of the QMesh software. In the former case, when a suitable mesh has been produced and is ready to be published, users simply have to provide the QMesh publishing tool with the location of the QGIS project file on the computer's file system when using the CLI. When using the GUI, this location is provided automatically when the project is opened in QGIS. 
The tool then searches for the \\texttt{} tags in the XML-based project file to determine the location of all the files that the project comprises; these may include shape files that define various layers in the geometry, data files in NetCDF format \\cite{RewDavis_1990} which define the bathymetry of the ocean, and a multitude of other data formats. Optionally, the location of the Gmsh mesh file may also be provided, thereby publishing the resultant output data along with the files required to produce it. The locations of all these data files, including the QGIS project file itself, are then provided to PyRDM which automatically creates a repository on the hosting service and uploads the files via the service's API. The service then returns a publication ID and a DOI, which is presented to the user for citation purposes. This process is illustrated in Figure \\ref{fig:publishing_process}.\n\nThe publication of software involves a similar process, but can currently only be accomplished via the CLI. The user only has to provide the QMesh publishing tool with the location of the software's source code on the computer's file system. The PyRDM library then handles the rest; it determines the exact version of QMesh currently in use using the Git version control system (git-scm.com) \\cite{Ram_2013}, and then checks to see whether that version has been published already\\footnote{Repository searching is only available when using the Figshare repository service, due to API limitations explained later in Section \\ref{sect:conclusions}. PyRDM will publish the software regardless of whether it has been published before when Zenodo or a DSpace-based service is chosen.}. If it has, PyRDM retrieves the existing DOI for re-use. If it has not, then PyRDM publishes the source code in a similar fashion to the case of publishing data, as shown in Figure \\ref{fig:publishing_process}. 
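The file-discovery step described above can be sketched in a few lines of Python. Since the tag name is elided in the text, the snippet assumes a hypothetical \texttt{datasource} element; the actual tags searched by the QMesh publishing tool and PyRDM may differ.

```python
# Minimal sketch of the file-discovery step: collect the file paths referenced
# by a QGIS project file. The tag name "datasource" is an assumption made for
# illustration; the real QMesh/PyRDM implementation may search different tags.
import xml.etree.ElementTree as ET

def project_data_files(project_xml):
    """Return the file paths referenced by <datasource> elements, in document order."""
    root = ET.fromstring(project_xml)
    return [el.text for el in root.iter("datasource") if el.text]

example = """
<qgis>
  <projectlayers>
    <maplayer><datasource>./coastline.shp</datasource></maplayer>
    <maplayer><datasource>./bathymetry.nc</datasource></maplayer>
  </projectlayers>
</qgis>
"""

print(project_data_files(example))  # ['./coastline.shp', './bathymetry.nc']
```

The returned list, together with the project file itself, is what would then be handed to the repository service's API for upload.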
Note that publications in journals would need to reference both the software repository's DOI and the data repository's DOI. There is currently no explicit link that is made between the software and data repositories, unless specified manually.\n\n\\begin{figure}\n \\centering\\includegraphics[width=0.44\\columnwidth]{data.jpg}\n \\hspace*{10mm}\\centering\\includegraphics[width=0.44\\columnwidth]{software.jpg}\n \\caption{The processes behind publishing the QGIS data files (left) and QMesh software source code (right) to Figshare.}\n \\label{fig:publishing_process}\n\\end{figure}\n\nAs demonstrated by Figure \\ref{fig:publishing_process}, the QMesh publishing tool requires minimal user interaction and is largely automated by the PyRDM library. This is important for encouraging the sharing of software and data files, in order to achieve a more open research environment.\n\n\\section{Workflow Example}\\label{sect:example}\nTo demonstrate an example of a scientific workflow involving mesh generation using GIS, the Orkney and Shetland Isles considered in \\cite{Avdis_etal_InPreparation} and \\cite{Avdis_etal_Accepted} are used. The researcher first has to describe the geography of the domain in QGIS and then decide on the area they wish to create a mesh for. The QGIS project for the Orkney and Shetland Isles comprises a number of geometrical layers which define the coastlines (and potentially coastal engineering structures such as marine power turbines), in addition to a NetCDF file which defines the bathymetry of the ocean floor, and another NetCDF file which defines the desired resolution throughout the mesh. These files are shown in Figure \\ref{fig:qgis_geometry} beside the area that will be meshed.\n\n\\begin{figure}\n \\centering\\includegraphics[width=1\\columnwidth]{qgis_geometry.png}\n \\caption{Screenshot of the UK region visualised in QGIS. 
The solid dark purple line defines the area that will be meshed (in this case it contains the Orkney and Shetland Isles). The different files that make up the layers of the geometry are specified in the column on the left-hand side.}\n \\label{fig:qgis_geometry}\n\\end{figure}\n\nThe mesh that QMesh produces for this domain (shown in Figure \\ref{fig:mesh}) is then used by the researcher in their marine simulations. Once the researcher is ready to publish their results, they upload the data files associated with the production of the simulation's mesh to an online repository using the QMesh publishing tool shown in Figure \\ref{fig:publish_dialog} (the CLI may also be used instead of the graphical interface). In this example, it uploads all the files previously mentioned to Figshare. Once uploaded, the files can be downloaded from the Figshare website (see Figure \\ref{fig:figshare_files}) and a DOI is presented to the researcher to share with colleagues and for use in journal publications (see Figure \\ref{fig:doi}).\n\n\\begin{figure}\n \\centering\\includegraphics[width=1\\columnwidth]{figshare_files.png}\n \\caption{A screenshot of the resulting repository on the Figshare website, with the files readily available to download. The QMesh publishing tool automatically assigns a title and tags to the repository based on the QGIS project's name.}\n \\label{fig:figshare_files}\n\\end{figure}\n\n\\begin{figure}\n \\centering\\includegraphics[width=0.75\\columnwidth]{doi.png}\n \\caption{A Figshare publication ID and a DOI are assigned to each repository, and presented to the researcher once the publication process is complete.}\n \\label{fig:doi}\n\\end{figure}\n\nThe version of the QMesh software's source code that is used should also be published, in a separate repository to the data. However, it should be noted that publishing the QMesh source code may not be enough to reproduce the exact same mesh without also knowing the versions of its dependencies. 
For example, different versions of Gmsh may produce slightly different meshes as a result of algorithmic improvements within the software. It is therefore important that such information be recorded in some way to further improve the degree of reproducibility. For example, ideally Gmsh would also have a similar system for publishing the current version of its source code in use.\n\n\\section{Discussion and Conclusions}\\label{sect:conclusions}\nThroughout the production of the PyRDM-based publishing tool for QMesh, several issues were encountered which largely stemmed from a lack of standardisation and support in the repository hosting services' APIs. For example, in order for PyRDM to attribute authors to the software repository on Figshare, all authors of QMesh must provide their Figshare author IDs in the \\texttt{AUTHORS} file that is part of the QMesh source code. Unfortunately, another different set of author IDs would need to be provided when using a different repository service such as Zenodo, which is inconvenient and requires all authors of QMesh to have accounts across all the supported services. A more standardised way of identifying and attributing authors to research software and data would be to use ORCID (orcid.org) researcher IDs. Figshare has recently added support for authenticating with ORCID IDs via its web interface \\cite{Figshare_2013}, and it is hoped that ORCID authentication via the Figshare API will also be added for the benefit of PyRDM. Another example, this time involving lack of API support, is the current inability to search for an existing repository with the Zenodo API. Further developments are necessary in this area to enrich the publication process and improve automation.\n\nThe production of meshes can involve proprietary and\/or private data which cannot be published openly, but at the same time sharing all research output is becoming a common requirement imposed by research funders. 
The QMesh publishing tool comes with the option of publishing the data to private repositories. However, with some services the private storage space is rather limited, and typically not large enough to store high quality mesh files for realistic ocean simulations. For example, the free private storage space offered by Figshare is 1 GB at the time of writing this paper, with a 250 MB individual file size limit\\footnote{\\texttt{http:\/\/figshare.com\/pricing}}. Furthermore, only a maximum of 5 collaborators can be given access to a private repository. In contrast, the integration of Figshare for Institutions \\cite{Figshare_2014} offers a more suitable platform for larger-scale research data management. This project enables researchers at an institution to publish to private repositories hosted in the cloud. This is considerably more sustainable for GIS projects and mesh generation that can involve very large file sizes, both public and private data, and collaboration amongst many researchers and research groups.\n\nIn conclusion, the integration of a publishing tool in a Geographical Information System has helped to mitigate one of the reasons why researchers tend not to publish their software and data; that is, it is time-consuming to do so with little reward. The new QMesh publishing tool makes publishing a computational mesh and associated data files easy and largely effortless through the addition of a significant amount of automation. Furthermore, the use of online repository services enable more formal citation of all research outputs through the use of DOIs. 
However, it is the responsibility of the scientific community to encourage and provide incentives for the openness and public availability of this software and data, in order to overcome the barrier of lack of motivation to publish.\n\n\\subsubsection*{Acknowledgments}\nCTJ was funded by an internal grant entitled ``Research data management: Where software meets data'' from the Research Office at Imperial College London. Part of the work presented in this paper is based on work first presented in poster form at the International Digital Curation Conference (IDCC) in February 2015 \\cite{Jacobs_etal_2015}, and in a PyRDM project report \\cite{Jacobs_etal_2014b}. The authors would like to thank the two anonymous reviewers of this paper for their feedback.\n\n\\bibliographystyle{splncs03}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\\label{sec:intro}\n\nThe idea of ``sparsifying a graph'' (i.\\,e., reducing the number of\nedges) while preserving the value of all cuts goes back\nto the influential paper~\\cite{BK96}. \nThe original motivation was to speed up algorithms for\ncut problems and graph problems more generally. This concept turned out to be very\ninfluential, with several generalisations and extensions\nfrom graph cuts~\\cite{Batson12:sicomp,BK15:sicomp,Andoni16:itcs}\nto sketching~\\cite{Ahn09:icalp,Andoni16:itcs}, sparsifiers for cuts in\nhypergraphs~\\cite{Kogan15:itcs,Newman13:sicomp}, spectral\nsparsification~\\cite{Spielman11:sicomp,Spielman04:stoc,Spielman11:sicomp-graph,Fung11:stoc,Lee18:sicomp,Soma19:soda,Kapralov21:focs}, sparsification of other predicates~\\cite{Filtser17:sidma}, and additive\nsparsification~\\cite{Bansal19:focs}.\n\nThe cut function of a graph is an important example of a submodular function, which we define now.\nLet $E$ be a finite set. 
A (set) function $F:2^E\\to\\ensuremath{\\mathbb{R}}$ defined on subsets of\n$E$ is called \\emph{submodular} if\n\\begin{equation}\\label{ineq:submodular}\n F(S\\cap T)+F(S\\cup T)\\ \\leq\\ F(S)+F(T)\\qquad \\forall S,T\\subseteq E.\n\\end{equation}\nSubmodularity is a fundamental concept in combinatorial optimisation, with\napplications across computer science and\neconomics~\\cite{Nemhauser88:optimization,Topkis98:Supermodularity,schrijver2003combinatorial,Fujishige2005submodular}.\nAn equivalent definition of submodular functions captures the idea of\n\\emph{diminishing returns}.\n\\begin{equation}\\label{ineq:submodular2}\n F(T\\cup\\{e\\})-F(T)\\ \\leq\\ F(S\\cup\\{e\\})-F(S)\\qquad \\forall S\\subseteq T\\subseteq E, e\\in E\\setminus T.\n\\end{equation}\n\n\nA set function $F$ is \\emph{decomposable} if $F = \\sum_{i\n= 1}^N f_i$, where $f_i: 2^E \\to \\ensuremath{\\mathbb{R}}$ for each $i\\in [N]=\\{1,\\ldots,N\\}$ and\n$E$ is a finite set of size $n=|E|$.\nThe cut function in a graph is an example of a decomposable submodular function,\nin which the number $N$ of individual functions is equal to the number of edges\nin the graph. 
(This is true even if the graph is directed and has nonnegative edge\nweights.)\n\nRafiey and Yoshida~\\cite{Rafiey22:aaai} considered the following natural\nsparsification problem for $F$.\\footnote{Each $f_i$ is represented by an oracle that returns, for any $S\\subseteq E$, the value $f_i(S)$.}\nGiven tolerance parameters $\\varepsilon, \\delta \\in (0,\n1)$, find a vector $w\\in\\ensuremath{\\mathbb{R}}^N$, called an $\\varepsilon$-\\emph{sparsifier} (or just a\n\\emph{sparsifier}), such that the\nfunction $F'=\\sum_{i=1}^Nw_if_i$ satisfies, with probability at least\n$1-\\delta$,\n\\begin{equation}\n (1-\\varepsilon)F'(S)\\ \\leq\\ F(S)\\ \\leq\\ (1+\\varepsilon)F'(S)\\qquad \\forall S\\subseteq E,\n\\end{equation}\nand $\\size{w}$, the number of nonzero entries of $w$, is as small as possible.\nThe idea in~\\cite{Rafiey22:aaai} is, for each $i\\in [N]$, to sample function\n$f_i$ with probability $\\kappa_i$ proportional to the ratio\n\\begin{equation}\n p_i = \\max_{\\substack{S\\subseteq E\\\\F(S) \\neq 0}} \\frac{f_i(S)}{F(S)}.\n\\end{equation}\nIf $f_i$ is sampled, i.\\,e., if it is decided that $f_i$ shall be part of the sparsifier, it is\nincluded in the sparsifier with weight $1 \/ \\kappa_i$, making its expected\nweight equal to $\\Ex{w_i} = \\kappa_i \\cdot 1 \/ \\kappa_i = 1$ -- its weight in the\ninitial decomposition. In statistical terms, the sampling procedure is \\emph{unbiased}.\nThe authors of~\\cite{Rafiey22:aaai} showed the following.\n\n\\begin{theorem}[\\cite{Rafiey22:aaai}]\n \\label{thm:core-old}\n Let $F=\\sum_{i=1}^N f_i$, where each $f_i:2^E\\to\\ensuremath{\\mathbb{R}}$ is submodular. 
For every $\\varepsilon,\\delta\\in(0,1)$ there is a vector $w \\in \\ensuremath{\\mathbb{R}}^N$ such that\n \\begin{enumerate}[label = (\\roman*)]\n \\item $\\Pr{\\text{\\normalfont $w$ is an $\\varepsilon$-sparsifier}} \\ge 1 - \\delta$;\n \\item $\\Ex{\\size{w}} = \\ensuremath{\\mathcal{O}}\\left(\\frac{n}{\\varepsilon^2} \\sum_{i = 1}^N p_i \\right)$,\n where $p_i = \\max_{\\substack{S \\subseteq E\\\\F(S) \\neq 0}} \\frac{f_i(S)}{F(S)}$.\n \\end{enumerate}\n\\end{theorem}\n\nComputing and in many interesting cases even approximating the $p_i$'s is by far\nthe hardest step on the way to constructing a sparsifier. We shall refer to the\n$p_i$'s as the \\emph{peak contributions} -- since $p_i$ describes, on a scale\nfrom $0$ to $1$, the maximum contribution of $f_i$ to $F$ when a set $S \\subseteq E$ is chosen in favour of $f_i$.\n\nLet $F=\\sum_{i=1}^Nf_i$ be as in Theorem~\\ref{thm:core-old}, i.\\,e., with all\n$f_i$'s submodular. \nLet $|\\ensuremath{\\mathsf{EX}}(\\ensuremath{\\mathcal{B}}(f_i))|$ be the number of extreme points in the base\npolyhedron of $f_i$~\\cite{Fujishige2005submodular}, and let\n$B=\\max_{i\\in[N]}|\\ensuremath{\\mathsf{EX}}(\\ensuremath{\\mathcal{B}}(f_i))|$ (cf. 
Appendix~\\ref{subsec:prelims} for precise\ndefinitions).\nThe authors of~\\cite{Rafiey22:aaai} claim that\n\\begin{equation}\\label{ineq:peaks}\n \\sum_{i=1}^Np_i\\ \\leq\\ Bn,\n\\end{equation}\nwhich implies by virtue of Theorem~\\ref{thm:core-old} the existence of a\nsparsifier of expected size $\\ensuremath{\\mathcal{O}}(\\frac{Bn^2}{\\varepsilon^2})$.\nAs we will see later, this only holds if the $f_i$'s are monotone.\nUsing an $\\ensuremath{\\mathcal{O}}(\\sqrt{n})$-approximation of the peak contributions obtained via the\nellipsoid method~\\cite{Bai16:icml}, it is then established in \\cite{Rafiey22:aaai} that if\nall $f_i$'s are not only submodular but also monotone, a sparsifier of expected size \n$\\ensuremath{\\mathcal{O}}(\\frac{Bn^{2.5}\\log n}{\\varepsilon^2})$ can be found in randomised polynomial time,\nassuming $(\\ref{ineq:peaks})$ holds.\nHere a function $F: 2^E\\to\\ensuremath{\\mathbb{R}}$ is called \\emph{monotone} if $F(S) \\leq F(T)$ for any $S\\subseteq\nT\\subseteq E$.\n\n\\noindent\\paragraph{Contributions}\n\nAs our main contribution, we provide a sparsification algorithm for decomposable monotone\n$k$-submodular functions of low curvature.\nAs a starting point, we observe in Section~\\ref{sec:core} (and prove in\nAppendix~\\ref{app:core-proofs}) that the sampling algorithm\nfrom~\\cite{Rafiey22:aaai} used to prove Theorem~\\ref{thm:core-old} is largely independent\nof submodularity, leading to a more general sparsification algorithm for\ndecomposable functions. \nAlong the way, we establish a concentration bound\nrevealing that it is very unlikely that the resulting sparsifier exceeds\n$(3\/2)$-times the expected size.\nIn detail, consider a finite domain $\\ensuremath{\\mathcal{D}}$, which is the power set $\\ensuremath{\\mathcal{D}} = 2^E$ in the case of set\nfunctions. 
Further suppose that $F: \\ensuremath{\\mathcal{D}} \\to \\ensuremath{\\mathbb{R}}$ is decomposable as $F = \\sum_{i =\n1}^N f_i$, where $f_i: \\ensuremath{\\mathcal{D}} \\to \\ensuremath{\\mathbb{R}}$ for each $i\\in [N]$.\\footnote{Each $f_i$ is\nrepresented by an evaluation oracle that takes time $\\ensuremath{\\mathcal{O}}(\\text{EO}_i)$ to return\n$f_i(S)$ for any $S\\in\\ensuremath{\\mathcal{D}}$.}\n\n\\begin{theorem}[Informal version of Theorem~\\ref{thm:core}]\n \\label{thm:core-informal}\n Let $F=\\sum_{i=1}^N f_i$, where $f_i:\\ensuremath{\\mathcal{D}}\\to\\ensuremath{\\mathbb{R}}$. For every $\\varepsilon,\\delta\\in(0,1)$ there is a vector $w \\in \\ensuremath{\\mathbb{R}}^N$ such that\n \\begin{enumerate}[label = (\\roman*)]\n \\item \n $\\Pr{\\text{\\normalfont $w$ is an $\\varepsilon$-sparsifier}} \\ge 1 - \\delta$;\n \\item \n $\\Ex{\\size{w}} = \\ensuremath{\\mathcal{O}}\\left(\\frac{\\log{|\\ensuremath{\\mathcal{D}}|} + \\log{\\frac{1}{\\delta}}}{\\varepsilon^2} \\sum_{i = 1}^N p_i \\right)$,\n where $p_i = \\max_{\\substack{A \\in \\ensuremath{\\mathcal{D}}\\\\F(A) \\neq 0}} \\frac{f_i(A)}{F(A)}$;\n \\item \n $\\Pr{\\size{w} \\le \\frac{3}{2} \\Ex{\\size{w}}} \\ge 1 - 4\\varepsilon^2$.\n \\end{enumerate}\n\\end{theorem}\n\nAs our primary contribution, we use Theorem~\\ref{thm:core-informal} to give in Section~\\ref{sec:curvature} a sparsifier for decomposable monotone $k$-submodular functions of low curvature. \nAs our secondary contribution, we clarify certain results on sparsification of submodular functions from~\\cite{Rafiey22:aaai}. \nFirstly, we show \nthat Inequality~(\\ref{ineq:peaks}) claimed in~\\cite{Rafiey22:aaai} is\nincorrect by giving a counterexample. However, we show that Inequality~(\\ref{ineq:peaks}) holds\nunder the additional assumption of monotonicity. 
This is done in\nAppendix~\\ref{app:submodular}.\nSecondly, in Appendix~\\ref{app:bounded-arity}\nwe give a sparsifier for a class\nof decomposable monotone submodular functions of bounded arity.\n\n\\medskip\nFor a natural number $k\\geq 1$, a function $F:(k+1)^E\\to\\ensuremath{\\mathbb{R}}$ defined on $k$-tuples of\npairwise disjoint subsets of $E$ is called $k$-submodular if $F$ satisfies inequalities\nsimilar to the submodularity inequality given in\nInequality~(\\ref{ineq:submodular}). \nIn detail, let $\\ensuremath{\\mathbf{A}}=(A_1,\\ldots,A_k)\\in(k+1)^E$ be a $k$-tuple of pairwise disjoint\nsubsets of $E$, and similarly for $\\ensuremath{\\mathbf{B}}=(B_1,\\ldots,B_k)\\in(k+1)^E$. Then,\n$F:(k+1)^E\\to\\ensuremath{\\mathbb{R}}$ is called \\emph{$k$-submodular} if\n\\begin{equation}\nF(\\ensuremath{\\mathbf{A}}\\sqcap \\ensuremath{\\mathbf{B}})+F(\\ensuremath{\\mathbf{A}}\\sqcup \\ensuremath{\\mathbf{B}}) \\ \\leq\\ F(\\ensuremath{\\mathbf{A}})+F(\\ensuremath{\\mathbf{B}}),\n\\end{equation}\nwhere \n\\begin{align}\n\\ensuremath{\\mathbf{A}}\\sqcap \\ensuremath{\\mathbf{B}}\\ &=\\ (A_1\\cap B_1,\\ldots, A_k\\cap B_k), \\\\\n\\intertext{and}\n\\ensuremath{\\mathbf{A}}\\sqcup \\ensuremath{\\mathbf{B}}\\ &=\\ ((A_1\\cup B_1)\\setminus\\smashoperator{\\bigcup_{i\\in\\{2,\\ldots,k\\}}}\\,(A_i\\cup B_i),\\ldots,(A_k\\cup B_k)\\setminus\\smashoperator{\\bigcup_{i\\in\\{1,\\ldots,k-1\\}}}\\,(A_i\\cup B_i)).\n\\end{align}\nUnder this definition, $1$-submodularity corresponds exactly to the standard\nnotion of submodularity for set functions as defined in\nInequality~(\\ref{ineq:submodular}), and\nsimilarly $2$-submodularity corresponds to \\emph{bisubmodularity}~\\cite{Bouchet87:greedy,Chandrasekaran88:pseudomatroids}.\nThe class of $k$-submodular functions was introduced in~\\cite{Huber12:ksub} and played an\nimportant role in the study of so-called finite-valued CSPs~\\cite{HKP14:sicomp,KTZ15:sicomp}.\nWhile minimising 1-submodular 
functions~\\cite{Schrijver00:submodular,Iwata01:submodular}\nand 2-submodular functions~\\cite{Fujishige06:bisubmodular} given by evaluation\noracles can be done efficiently~\\cite{Iwata08:sfm-survey}, the\ncomplexity of the minimisation problem of $k$-submodular functions is open for\n$k\\geq 3$. On the other hand, the approximability of the\nmaximisation problem is well\nunderstood for $k$-submodular functions~\\cite{WZ16:talg,Iwata16:soda,Oshima21:sidma}, also for the monotone\ncase under cardinality constraint~\\cite{Sakaue17:do}, in the streaming\nmodel~\\cite{Ene22:icml}, and other variants~\\cite{Tang22:orl,Tang22:tcs,Pham22:jco}.\n\nThe definition of monotonicity for submodular functions gracefully extends\nto $k$-submodular functions: $F:(k+1)^E\\to\\ensuremath{\\mathbb{R}}$ is called \\emph{monotone} if\n$F(\\ensuremath{\\mathbf{A}})\\leq F(\\ensuremath{\\mathbf{B}})$ for all $\\ensuremath{\\mathbf{A}}=(A_1,\\ldots,A_k)\\in(k+1)^E$ and\n$\\ensuremath{\\mathbf{B}}=(B_1,\\ldots,B_k)\\in(k+1)^E$ with $A_i\\subseteq B_i$ for every $i\\in[k]$.\n\nAn important concept studied in the context of submodular functions is that of\n\\emph{bounded curvature}~\\cite{Conforti84:dam}.\nFor a monotone submodular function $F:2^E\\to\\ensuremath{\\mathbb{R}_{\\ge 0}}$, the\n\\emph{curvature} (also called \\emph{total curvature} in~\\cite{Vondrak10}) $c_F$ of $F$\nis defined by\n\\begin{equation}\n c_F\\ =\\ 1-\\min_{S\\subseteq E,e\\in E\\setminus S}\\frac{\\Delta_e\n F(S)}{\\Delta_e F(\\emptyset)},\n\\end{equation}\nwhere $\\Delta_e F(S)$ denotes the marginal gain of $e$ with respect to $S$,\ni.\\,e., \n\\begin{equation}\n \\Delta_e F(S)\\ =\\ F(S\\cup\\{e\\})-F(S).\n\\end{equation}\nIn other words, the curvature compares the marginal gain of adding an element\nto an arbitrary set with its marginal gain with respect to the empty set. Note that $c_F\\in [0,1]$, with the\nupper bound following from Inequality~(\\ref{ineq:submodular2}). 
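To make the definition concrete, the curvature of a small instance can be computed by brute force directly from this formula. The following Python sketch (our illustration, not part of the paper) does so for the monotone submodular function $F(S)=\sqrt{|S|}$ on a four-element ground set:

```python
from itertools import combinations
from math import sqrt, isclose

def F(S):
    # a monotone submodular set function (toy example): F(S) = sqrt(|S|)
    return sqrt(len(S))

def curvature(F, E):
    # c_F = 1 - min over S subset of E, e not in S of  Delta_e F(S) / Delta_e F(emptyset)
    ratios = []
    for r in range(len(E)):
        for combo in combinations(E, r):
            S = set(combo)
            for e in E:
                if e in S:
                    continue
                gain_S = F(S | {e}) - F(S)          # Delta_e F(S)
                gain_empty = F({e}) - F(set())      # Delta_e F(emptyset)
                ratios.append(gain_S / gain_empty)
    return 1 - min(ratios)

E = list(range(4))
c = curvature(F, E)
# the minimum marginal ratio is attained at |S| = 3: (sqrt(4) - sqrt(3)) / 1
assert isclose(c, sqrt(3) - 1)
```

For this function the minimum ratio occurs at $|S|=|E|-1$, so $c_F = 1-(\sqrt{4}-\sqrt{3}) = \sqrt{3}-1 \approx 0.73 < 1$, i.e., $F$ has low curvature.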
Also, $c_F=0$ holds\nprecisely when $F$ is modular, i.\\,e., when Inequality~(\\ref{ineq:submodular})\n(equivalently, Inequality~(\\ref{ineq:submodular2}))\nholds with equality. We say that $F$ has \\emph{low curvature} if $c_F<1$.\nIntuitively, the curvature $c_F$ represents ``how\nmuch the function curves''. The notion of curvature was extended from submodular\nto $k$-submodular functions in~\\cite{SantiagoS19:icml}, cf. also~\\cite{ksub-curvature-def}.\nIn order to define it, we first need to introduce the notion of\nmarginal values for $k$-submodular functions, which is a natural generalisation\nof the $k=1$ case. Let $F:(k+1)^E\\to\\ensuremath{\\mathbb{R}}$ be a $k$-submodular function. For \n$\\ensuremath{\\mathbf{A}}=(A_1,\\ldots,A_k)$, $i\\in [k]$, and $e\\in E\\setminus\\cup_{j\\in [k]}A_j$, we\ndefine the marginal gain of $e$ with respect to $\\ensuremath{\\mathbf{A}}$ and $i$ as\n\\begin{equation}\n \\Delta_{e,i} F(\\ensuremath{\\mathbf{A}})\\ =\\\n F(A_1,\\ldots,A_{i-1},A_i\\cup\\{e\\},A_{i+1},\\ldots,A_k)-F(\\ensuremath{\\mathbf{A}}).\n\\end{equation}\nThen, the curvature $c_F$ of $F$ is defined as\n\\begin{equation}\n c_F\\ =\\ 1-\\min_{i\\in [k], e\\in E, \\ensuremath{\\mathbf{A}}\\in(k+1)^{E\\setminus\\{e\\}}} \n \\frac{\\Delta_{e,i} F(\\ensuremath{\\mathbf{A}})}{\\Delta_{e,i} F(\\emptyset)}.\n\\end{equation}\nAs before, we say that $F$ has \\emph{low curvature} if $c_F<1$.\n\nAs our main contribution, we will show that under the assumption of monotonicity\nand low curvature one can efficiently approximate the peak contributions,\nleading to an efficient execution of the sampling algorithm from\nSection~\\ref{sec:core}. 
Apart from being technically non-trivial, we also see\nour work as a conceptual contribution to the area of sparsification by exploring\nmore general settings than previous works.\n\n\\section{The Core Algorithm}\n\\label{sec:core}\n\nIn this section, we will describe the core of all our sparsification algorithms\n-- a randomised sampling routine initially described by Rafiey and\nYoshida~\\cite{Rafiey22:aaai} for decomposable submodular functions. As alluded\nto in Section~\\ref{sec:intro}, we observe\nthat it is largely independent of submodularity, leading to a more general\nsparsification algorithm for decomposable functions. \nMost of the presented material follows closely Section 3\nin~\\cite{Rafiey22:aaai} and details are deferred to Appendix~\\ref{app:core-proofs}.\n\nThe algorithm we present here constructs an $\\varepsilon$-sparsifier for any\ndecomposable function $F = \\sum_{i = 1}^N f_i: \\ensuremath{\\mathcal{D}} \\to \\ensuremath{\\mathbb{R}}$\nprobabilistically.\nAs in~\\cite{Rafiey22:aaai}, it relies on sampling functions with probabilities proportional to the ratios,\nfor each $i \\in [N]$,\n\\begin{align}\n p_i = \\max_{\\substack{A \\in \\ensuremath{\\mathcal{D}}\\\\F(A) \\neq 0}} \\frac{f_i(A)}{F(A)}.\n\\end{align}\n\nThe procedure with all details is given in~\\ralg{core}.\n\n\\begin{algorithm}[htbp]\n \\begin{algorithmic}[1]\n \\Require Function $F = f_1 + \\dots + f_N$ with $f_i: \\ensuremath{\\mathcal{D}} \\to\n \\ensuremath{\\mathbb{R}}$ given by evaluation oracles;\n error tolerance parameters $\\varepsilon, \\delta \\in (0, 1)$\n \\Ensure Vector $w \\in \\ensuremath{\\mathbb{R}}^N$ such that\n \\begin{itemize}\n \\item $\\Pr{\\text{$w$ is an $\\varepsilon$-sparsifier}} \\ge 1 - \\delta$;\n \\item $\\Ex{\\size{w}} = \\ensuremath{\\mathcal{O}}\\left(\\frac{\\log{|\\ensuremath{\\mathcal{D}}|} + \\log{\\frac{1}{\\delta}}}{\\varepsilon^2} \\sum_{i = 1}^N p_i \\right)$,\n where $p_i = \\max_{\\substack{A \\in \\ensuremath{\\mathcal{D}}\\\\F(A) 
\\neq 0}} \\frac{f_i(A)}{F(A)}$;\n \\item $\\Pr{\\size{w} \\le \\frac{3}{2} \\Ex{\\size{w}}} \\ge 1 - 4\\varepsilon^2$.\n \\end{itemize}\n \\State $w \\gets (0, \\dots, 0)$\n \\State $\\kappa \\gets 3 \\log{\\left( \\frac{2 |\\ensuremath{\\mathcal{D}}|}{\\delta} \\right)} \/ \\varepsilon^2$\n \\For{$i = 1, \\dots, N$}\n \\State \\label{alg:core:pi} $p_i \\gets \\max_{\\substack{A \\in \\ensuremath{\\mathcal{D}}\\\\F(A) \\neq 0}} \\frac{f_i(A)}{F(A)}$ \\Comment{compute peak contribution (here: naively)}\n \\State $\\kappa_i \\gets \\min\\{ 1, \\kappa p_i \\}$ \\Comment{cap at $1$ as $\\kappa_i$ is a probability}\n \\State $w_i \\gets \\begin{cases}\n 1\/\\kappa_i & \\text{ with probability } \\kappa_i \\\\\n 0 & \\text{ with probability } 1 - \\kappa_i\n \\end{cases}$ \\Comment{sample weight of $f_i$}\n \\EndFor\n \\State \\Return $w$\n \\end{algorithmic}\n \\caption{The Core Sparsification Algorithm}\n \\label{alg:core}\n\\end{algorithm}\n\n\\begin{theorem}\\label{thm:core}\n \\ralg{core} outputs a vector $w \\in \\ensuremath{\\mathbb{R}}^N$ such that\n \\begin{enumerate}[label = (\\roman*)]\n \\item\n $\\Pr{\\text{\\normalfont $w$ is an $\\varepsilon$-sparsifier}} \\ge 1 - \\delta$;\n \\item \n $\\Ex{\\size{w}} = \\ensuremath{\\mathcal{O}}\\left(\\frac{\\log{|\\ensuremath{\\mathcal{D}}|} + \\log{\\frac{1}{\\delta}}}{\\varepsilon^2} \\sum_{i = 1}^N p_i \\right)$,\n where $p_i = \\max_{\\substack{A \\in \\ensuremath{\\mathcal{D}}\\\\F(A) \\neq 0}} \\frac{f_i(A)}{F(A)}$;\n \\item \\label{thm:core:size:variance0} \n $\\Pr{\\size{w} \\le \\frac{3}{2} \\Ex{\\size{w}}} \\ge 1 - 4\\varepsilon^2$.\n \\end{enumerate}\n\\end{theorem}\n\n\\begin{remark}\n \\ralg{core} can be invoked with $\\delta = \\ensuremath{\\mathcal{O}}(1 \/ n^c)$ so that it yields an $\\varepsilon$-sparsifier with high probability.\n This only influences the running time by a constant factor $c$ because of the dependence on $\\log{\\frac{1}{\\delta}}$.\n\\end{remark}\n\n\\begin{remark}\n If the size of 
the sparsifier is of primary interest, running~\\ralg{core} a\n couple of times and taking the smallest vector $w$ (with respect to $\\size{w}$)\n leads to a procedure that, for any fixed $\\varepsilon > 0$, returns a sparsifier of size\n $\\ensuremath{\\mathcal{O}}\\left(\\frac{\\log{|\\ensuremath{\\mathcal{D}}|} + \\log{\\frac{1}{\\delta}}}{\\varepsilon^2} \\sum_{i = 1}^N p_i \\right)$\n after a logarithmic number of iterations. This is a consequence of Theorem~\\ref{thm:core}\\,\\ref{thm:core:size:variance0}.\n Notice that it might be necessary to choose $\\delta$ appropriately to also guarantee\n that the solution indeed is an $\\varepsilon$-sparsifier with high probability.\n\\end{remark}\n\n\\begin{corollary}\n \\label{cor:core}\n In the setting of~\\ralg{core}, let $\\widehat{p}_1, \\dots, \\widehat{p}_N \\in \\ensuremath{\\mathbb{R}_{\\ge 0}}$\n satisfy $\\widehat{p}_i \\ge p_i$ for all $i \\in [N]$.\n If~\\ralg{core} is executed with the $\\widehat{p}_i$'s instead of $p_i = \\max_{\\substack{A \\in \\ensuremath{\\mathcal{D}}\\\\F(A)\\neq 0}} \\frac{f_i(A)}{F(A)}$\n in line~\\ref{alg:core:pi}, it returns a vector $w \\in \\ensuremath{\\mathbb{R}}^N$ such that\n \\begin{enumerate}[label = (\\roman*)]\n \\item $\\Pr{\\text{\\normalfont $w$ is an $\\varepsilon$-sparsifier}} \\ge 1 - \\delta$;\n \\item $\\Ex{\\size{w}} = \\ensuremath{\\mathcal{O}}\\left(\\frac{\\log{|\\ensuremath{\\mathcal{D}}|} + \\log{\\frac{1}{\\delta}}}{\\varepsilon^2} \\sum_{i = 1}^N \\widehat{p}_i \\right)$;\n \\item $\\Pr{\\size{w} \\le \\frac{3}{2} \\Ex{\\size{w}}} \\ge 1 - 4\\varepsilon^2$.\n \\end{enumerate}\n\\end{corollary}\n\nNote that Corollary~\\ref{cor:core} implies that any constant-factor approximation of the $p_i$'s will do the job,\nleading to the same asymptotic bounds.\nHowever, in the general setting of functions $f_1, \\dots, f_N: \\ensuremath{\\mathcal{D}} \\to \\ensuremath{\\mathbb{R}}$, there is no way of obtaining this much faster than the exact $p_i$'s.\nEven in the 
case where all $f_i$'s are submodular, the $p_i$'s are hard to\napproximate. \n\n\\begin{remark}\n In general, the best upper bound we know on the peak contributions is $p_i \\le 1$.\n Thus, \\rcorollary{core} tells us that it is correct to invoke~\\ralg{core} with $\\widehat{p}_i = 1$ for all $i \\in [N]$.\n Since $\\kappa > 1$ for $\\varepsilon \\in (0, 1)$, we then have $\\kappa_i = \\min\\{ 1, \\kappa \\widehat{p}_i \\} = 1$.\n This results in the initial decomposition, i.\\,e., \\ralg{core} essentially computes nothing -- a sparsifier is not for free!\n\\end{remark}\n\n\\begin{remark}\nThere are various ways to implement~\\ralg{core}, leading to different running time bounds. The representation of the functions involved plays a key role here. In the most general scenario where no further assumptions are made, it is reasonable to assume each $f_i$ is represented by an evaluation oracle with response time $\\ensuremath{\\mathcal{O}}(\\mathsf{EO}_i)$. We may further assume an additional oracle for $F$ with response time $\\ensuremath{\\mathcal{O}}(\\mathsf{EO}_{\\Sigma})$ -- in many applications, $F(S)$ can be computed much faster than by adding up all $f_i(S)$ for $i \\in [N]$.\n\nThe main work to be done for~\\ralg{core} to successfully construct a (small) sparsifier is the computation or approximation of the peak contributions. 
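Once (upper bounds on) the peak contributions are available, the sampling loop itself is inexpensive. The following Python sketch (ours; the input values are hypothetical) mirrors the sampling part of the core algorithm:

```python
import math
import random

def core_sparsifier(p_hats, eps, delta, domain_size, seed=0):
    """Sampling loop of the core algorithm (sketch).

    p_hats are upper bounds on the peak contributions p_i, computed
    elsewhere (naively, or via an approximation as in Section 4).
    Returns the weight vector w.
    """
    rng = random.Random(seed)
    kappa = 3 * math.log(2 * domain_size / delta) / eps ** 2
    w = []
    for p in p_hats:
        k_i = min(1.0, kappa * p)  # cap at 1 as k_i is a probability
        # keep f_i with probability k_i, rescaled by 1 / k_i
        w.append(1.0 / k_i if rng.random() < k_i else 0.0)
    return w

w = core_sparsifier([0.5, 0.001, 1.0], eps=0.5, delta=0.1, domain_size=2 ** 10)
assert len(w) == 3
# every function kept in the sparsifier carries weight 1 / k_i >= 1
assert all(x == 0.0 or x >= 1.0 for x in w)
```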
In the most general setting, we are required to compute\n$p_i$ by iterating through all $A \\in \\ensuremath{\\mathcal{D}}$, which takes time at least $\\Omega(|\\ensuremath{\\mathcal{D}}|)$.\nHence, the running time of a naive implementation is in $\\ensuremath{\\mathcal{O}}\\left( N |\\ensuremath{\\mathcal{D}}| \\sum_{i = 1}^N \\mathsf{EO}_i \\right)$.\nThe main contribution of our work is to show how to approximate $p_i$'s\n efficiently for interesting cases, most notably for monotone $k$-submodular functions\n of low curvature.\n\\end{remark}\n\n\\section{Monotone \\texorpdfstring{$k$}{k}-Submodular Functions of Low Curvature}\n\\label{sec:curvature}\n\n\nLet $F: (k + 1)^E \\to \\ensuremath{\\mathbb{R}_{\\ge 0}}$ be decomposable as $F=\\sum_{i=1}^N f_i$ such that\neach $f_i:(k+1)^E\\to\\ensuremath{\\mathbb{R}_{\\ge 0}}$ is a non-negative monotone $k$-submodular function of low\ncurvature. It follows from the definitions that $F$ is also non-negative, monotone, $k$-submodular, and of\nlow curvature.\n\nWe will show how to approximate the peak contributions \n $p_i = \\max_{\\ensuremath{\\mathbf{A}} \\in (k + 1)^E} \\frac{f_i(\\ensuremath{\\mathbf{A}})}{F(\\ensuremath{\\mathbf{A}})}$.\n\nTo this end, it suffices to approximate \n $\\max_{\\ensuremath{\\mathbf{A}} \\in (k + 1)^E} \\frac{f(\\ensuremath{\\mathbf{A}})}{g(\\ensuremath{\\mathbf{A}})}$\nfor two monotone $k$-submodular functions $f, g: (k + 1)^E \\to \\ensuremath{\\mathbb{R}_{\\ge 0}}$ of low curvature.\nGiven $\\ensuremath{\\mathbf{A}} = (A_1, \\dots, A_k) \\in (k + 1)^E$, we define\n\\begin{equation}\n S_f(\\ensuremath{\\mathbf{A}}) := \\sum_{i = 1}^k \\sum_{e \\in A_i} \\Delta_{e, i}\\left( f\n \\mid \\varnothing, \\dots, \\varnothing \\right) \n\\end{equation}\nand\n\\begin{equation}\n S_g(\\ensuremath{\\mathbf{A}}) := \\sum_{i = 1}^k \\sum_{e \\in A_i} \\Delta_{e, i}\\left( g \\mid \\varnothing, \\dots, \\varnothing \\right).\n\\end{equation}\nIt turns out that $\\frac{S_f(\\ensuremath{\\mathbf{A}}) 
+ f(\\varnothing, \\dots, \\varnothing)}{S_g(\\ensuremath{\\mathbf{A}}) + g(\\varnothing, \\dots, \\varnothing)}$\napproximates $\\frac{f(\\ensuremath{\\mathbf{A}})}{g(\\ensuremath{\\mathbf{A}})}$ well.\n\n\\begin{lemma}\n \\label{lem:approx}\n Let $\\ensuremath{\\mathbf{A}} \\in (k + 1)^E$ be a $(1 - \\varepsilon)$-approximate maximiser\n of $\\frac{S_f(\\ensuremath{\\mathbf{A}}) + f(\\varnothing, \\dots, \\varnothing)}{S_g(\\ensuremath{\\mathbf{A}}) + g(\\varnothing, \\dots, \\varnothing)}$.\n Then\n \\begin{align*}\n \\frac{f(\\ensuremath{\\mathbf{A}})}{g(\\ensuremath{\\mathbf{A}})} \\ge (1 - \\varepsilon)(1 - c_f)(1 - c_g)\n \\frac{f(\\ensuremath{\\mathbf{A}}^*)}{g(\\ensuremath{\\mathbf{A}}^*)}\n \\end{align*}\n for any $\\ensuremath{\\mathbf{A}}^* \\in (k + 1)^E$.\n\\end{lemma}\n\n\\noindent\nSetting $\\varepsilon = 1\/2$ in Lemma~\\ref{lem:approx} gives a $\\frac{1}{2} (1 - c_f) (1 - c_g)$-approximation, which is a constant factor if $c_f$ and $c_g$ are considered constants.\nIt remains to describe how\n $\\max_{\\ensuremath{\\mathbf{A}} \\in (k + 1)^E} \\frac{S_f(\\ensuremath{\\mathbf{A}}) + f(\\varnothing, \\dots, \\varnothing)}{S_g(\\ensuremath{\\mathbf{A}}) + g(\\varnothing, \\dots, \\varnothing)}$\ncan be approximated up to a factor of $1 - \\varepsilon$ (or $1\/2$ specifically in our use case).\nThis is done by a reduction to the \\textsc{modular-ratio-max}\\xspace problem,\\footnote{Note that we allow $A$ and $B$ to be zero, while the $x_i$'s and $y_i$'s are strictly positive. This is a somewhat technical requirement to avoid division by zero.}\ndefined below, for which\nwe design a fully polynomial-time approximation scheme (FPTAS).\n\\smallskip\n\n\\begin{tcolorbox}\n \\underline{\\textsc{modular-ratio-max}\\xspace} \\linebreak\n \\textbf{Given:} $x_1, \\dots, x_n, y_1, \\dots, y_n \\in \\ensuremath{\\mathbb{R}}_{> 0}$ and $A, B \\in \\ensuremath{\\mathbb{R}_{\\ge 0}}$. 
\\\\\n \\textbf{Want:} Index set $\\varnothing \\neq I \\subseteq [n]$ such that $\\varrho(I) := \\frac{A + \\sum_{i \\in I} x_i}{B + \\sum_{i \\in I} y_i}$ is maximal.\n\\end{tcolorbox}\n\nThe reduction to \\textsc{modular-ratio-max}\\xspace is now easy to describe. Recall that\n\\begin{equation}\n \\frac{S_f(\\ensuremath{\\mathbf{A}}) + f(\\varnothing, \\dots, \\varnothing)}{S_g(\\ensuremath{\\mathbf{A}}) + g(\\varnothing, \\dots, \\varnothing)}\n = \\frac{A + \\sum_{i = 1}^k \\sum_{e \\in A_i} \\Delta_{e, i}\\left( f \\mid \\varnothing, \\dots, \\varnothing \\right)}\n {B + \\sum_{i = 1}^k \\sum_{e \\in A_i} \\Delta_{e, i}\\left( g \\mid \\varnothing, \\dots, \\varnothing \\right)}\n\\end{equation}\nwith $A := f(\\varnothing, \\dots, \\varnothing)$ and $B := g(\\varnothing, \\dots, \\varnothing)$.\nIf we number the pairs $(e, i) \\in E \\times [k]$ in some arbitrary order $(e_1, i_1), \\dots, (e_{nk}, i_{nk})$, we find that\nmaximising $\\frac{S_f(\\ensuremath{\\mathbf{A}}) + f(\\varnothing, \\dots, \\varnothing)}{S_g(\\ensuremath{\\mathbf{A}}) + g(\\varnothing, \\dots, \\varnothing)}$\nis the same as maximising\n\\begin{equation}\n \\frac{A + \\sum_{\\ell \\in I} \\Delta_{e_{\\ell}, i_{\\ell}}\\left( f \\mid \\varnothing, \\dots, \\varnothing \\right)}\n {B + \\sum_{\\ell \\in I} \\Delta_{e_{\\ell}, i_{\\ell}}\\left( g \\mid \\varnothing, \\dots, \\varnothing \\right)}\n\\end{equation}\nover all index sets $\\varnothing \\neq I \\subseteq [nk]$ (the $I = \\varnothing$ case can be checked manually or ignored\nas it corresponds to $\\ensuremath{\\mathbf{A}} = (\\varnothing, \\dots, \\varnothing)$).\nSince $f$ and $g$ are monotone, marginal gains are always non-negative, so\n$\\Delta_{e_{\\ell}, i_{\\ell}}\\left( f \\mid \\varnothing, \\dots, \\varnothing \\right) \\ge 0$\nand $\\Delta_{e_{\\ell}, i_{\\ell}}\\left( g \\mid \\varnothing, \\dots, \\varnothing \\right) \\ge 0$ for all $\\ell \\in [nk]$.\nTo satisfy the strict positivity as required in the definition 
of \\textsc{modular-ratio-max}\\xspace, we can drop the marginal gains that are equal to $0$.\nThis will only shrink the problem.\n\nSumming up, the algorithm to compute $p_i = \\max_{\\ensuremath{\\mathbf{A}} \\in (k + 1)^E} \\frac{f_i(\\ensuremath{\\mathbf{A}})}{F(\\ensuremath{\\mathbf{A}})}$ can be outlined as follows:\n\\begin{itemize}\n \\item Compute a $(1\/2)$-approximation $\\ensuremath{\\mathbf{A}}^*$ to\n $\\max_{\\ensuremath{\\mathbf{A}} \\in (k + 1)^E} \\frac{S_f(\\ensuremath{\\mathbf{A}}) + f(\\varnothing, \\dots, \\varnothing)}{S_g(\\ensuremath{\\mathbf{A}}) + g(\\varnothing, \\dots, \\varnothing)}$\n via the \\textsc{modular-ratio-max}\\xspace reduction and Algorithm \\ref{alg:fptas:mod:ratio}.\n \\item Let $\\widehat{p}_i := \\frac{2}{(1 - c_f)(1 - c_g)}\n \\frac{f(\\ensuremath{\\mathbf{A}}^*)}{g(\\ensuremath{\\mathbf{A}}^*)}$.\n\\end{itemize}\nIt is guaranteed that $\\widehat{p}_i \\ge p_i$ by Lemma \\ref{lem:approx}. Moreover, $\\widehat{p}_i = \\ensuremath{\\mathcal{O}}(1) \\cdot p_i$, so\nwe get a sparsifier of an expected size that matches the existence result by applying the core algorithm.\nMoreover, the algorithm runs in polynomial time, as we will see in Lemma \\ref{lem:iteration:bound}.\n\n\\subsection{FPTAS}\n\nIf $A = B = 0$, \\textsc{modular-ratio-max}\\xspace has a very simple solution: We can just take $I = \\{ i \\}$ for an index $i$ that maximises $x_i \/ y_i$.\nHowever, this is not optimal in general as the following example shows. Let $A = 1$, $B = 100$ and $x_1 = 2$, $y_1 = 3$, $x_2 = 1$, $y_2 = 1$.\nClearly, the ratio $x_i \/ y_i$ is maximised for $i = 2$, leading to an overall $\\varrho$-value of\n\\begin{align*}\n \\varrho(\\{ 2 \\}) = \\frac{1 + 1}{100 + 1} = \\frac{2}{101}\n\\end{align*}\nas opposed to\n\\begin{align*}\n \\varrho(\\{ 1, 2 \\}) = \\frac{1 + 2 + 1}{100 + 3 + 1} = \\frac{4}{104},\n\\end{align*}\nwhich is clearly larger. 
In fact, it is not hard to see that the maximiser of $x_i \/ y_i$ does not even provide a constant-factor approximation;\nit may end up arbitrarily bad compared to an optimal solution. This indicates that a different approach is needed.\n\nThe solution is an FPTAS based on binary search, outlined in Algorithm \\ref{alg:fptas:mod:ratio}.\nThis is possible because we can easily solve the associated decision problem:\nGiven a target value $\\lambda \\in \\ensuremath{\\mathbb{R}}$, does there exist an index set $\\varnothing \\neq I \\subseteq [n]$ such that $\\varrho(I) \\ge \\lambda$?\n\nThe decision problem is simplified by algebraic equivalence transformations:\n\\begin{align*}\n \\varrho(I) \\ge \\lambda\n \\iff \\frac{A + \\sum_{i \\in I} x_i}{B + \\sum_{i \\in I} y_i} \\ge \\lambda\n \\iff A - B\\lambda + \\sum_{i \\in I} \\left( x_i - \\lambda y_i \\right) \\ge 0\n\\end{align*}\nTo check whether the last expression is non-negative for some non-empty index set $I$,\nwe consider the indices in non-increasing order of the quantities $x_i - \\lambda y_i$.\nWe must take the first index in this order (as $I \\neq \\varnothing$ is required) and will then take all remaining indices $i$ for which $x_i - \\lambda y_i$\nis positive. This maximises the LHS over all $I \\neq \\varnothing$. 
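The greedy check just described, together with the surrounding binary search of Algorithm \ref{alg:fptas:mod:ratio}, can be sketched in Python as follows (our illustration; variable names are ours, and the test instance is the one from the example above):

```python
def check(lam, xs, ys, A, B):
    """Greedy decision oracle: maximise A - lam*B + sum_{i in I}(x_i - lam*y_i)
    over non-empty I; return the maximising I if the maximum is >= 0, else None."""
    order = sorted(range(len(xs)), key=lambda i: xs[i] - lam * ys[i], reverse=True)
    I = [order[0]]  # I must be non-empty, so take the best index
    S = (A - lam * B) + (xs[order[0]] - lam * ys[order[0]])
    for i in order[1:]:
        gain = xs[i] - lam * ys[i]
        if gain > 0:  # only strictly positive terms help
            I.append(i)
            S += gain
    return I if S >= 0 else None

def rho(I, xs, ys, A, B):
    return (A + sum(xs[i] for i in I)) / (B + sum(ys[i] for i in I))

def modular_ratio_max(xs, ys, A, B, eps):
    """Binary-search FPTAS sketch for modular-ratio-max."""
    n = len(xs)
    m, M = min(xs + ys), max(xs + ys)
    lo, hi = (A + m) / (B + n * M), (A + n * M) / (B + m)
    I = [0]  # arbitrary feasible solution
    while hi - lo > eps * (A + m) / (B + n * M):
        lam = (lo + hi) / 2
        I_lam = check(lam, xs, ys, A, B)
        if I_lam is None:
            hi = lam            # no non-empty I reaches lam
        else:
            I, lo = I_lam, lam  # rho(I) >= lam
    return I

# the instance from the example above: the optimum takes both indices
xs, ys, A, B = [2.0, 1.0], [3.0, 1.0], 1.0, 100.0
I = modular_ratio_max(xs, ys, A, B, eps=0.01)
assert rho(I, xs, ys, A, B) >= (1 - 0.01) * (4 / 104)
```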
If this maximum is non-negative, we know that $\\varrho(I) \\ge \\lambda$ by the\nabove equivalences.\n\nLet $m := \\min\\{ x_1, \\dots, x_n, y_1, \\dots, y_n \\}$ and $M := \\max\\{ x_1, \\dots, x_n, y_1, \\dots, y_n \\}$.\nFor any non-empty index set $I$, we always have\n\\begin{align}\n \\label{eq:rho:lower}\n \\varrho(I) = \\frac{A + \\sum_{i \\in I} x_i}{B + \\sum_{i \\in I} y_i} \\ge \\frac{A + m}{B + n M}\n\\end{align}\nas well as\n\\begin{align}\n \\label{eq:rho:upper}\n \\varrho(I) = \\frac{A + \\sum_{i \\in I} x_i}{B + \\sum_{i \\in I} y_i} \\le \\frac{A + n M}{B + m},\n\\end{align}\nso we can initialise the binary search with $\\varrho^- = \\frac{A + m}{B + n M}$ and $\\varrho^+ = \\frac{A + n M}{B + m}$.\nOnce the interval $\\left[ \\varrho^{-}, \\varrho^{+} \\right]$ has length at most $\\varepsilon \\frac{A + m}{B + n M}$, we know that the multiplicative\nerror is at most $\\varepsilon$. Since the interval size halves in each step, this point is reached after no more than\n$\\left\\lceil \\log{\\frac{1}{\\varepsilon}} + 2 \\left( \\log{n} + \\log{\\frac{M}{m}} \\right) \\right\\rceil$ iterations, as the following lemma shows.\n\n\\begin{lemma}\n \\label{lem:iteration:bound}\n The binary search in Algorithm \\ref{alg:fptas:mod:ratio} terminates\n in $k := \\left\\lceil \\log{\\frac{1}{\\varepsilon}} + 2 \\left( \\log{n} + \\log{\\frac{M}{m}} \\right) \\right\\rceil$ iterations.\n Moreover, the final set $I$ satisfies $\\varrho(I) \\ge (1 - \\varepsilon) \\varrho(I^*)$ for any $\\varnothing \\neq I^* \\subseteq [n]$.\n\\end{lemma}\n\\begin{proof}\n To see the iteration bound, we note that $\\abs{\\varrho^+ - \\varrho^-}$ shrinks by a factor of $2$ in each iteration.\n Thus, after $k$ iterations, it holds\n \\begin{align*}\n \\abs{\\varrho^+ - \\varrho^-} \\le \\frac{1}{2^k} \\abs{\\frac{A + n M}{B + m} - \\frac{A + m}{B + n M}} \\le \\frac{1}{2^k} \\frac{A + n M}{B + m}.\n \\end{align*}\n We want this to be $\\le \\varepsilon \\frac{A + m}{B + n M}$, 
which is equivalent to\n \\begin{align}\n \\label{eq:k:large:enough}\n \\frac{1}{2^k} \\frac{A + n M}{B + m} \\le \\varepsilon \\frac{A + m}{B + n M}\n \\iff 2^k \\ge \\frac{1}{\\varepsilon} \\frac{A + n M}{A + m} \\frac{B + n M}{B + m}.\n \\end{align}\n Since $n M \\ge m$, we have $\\frac{A + n M}{A + m} \\le \\frac{n M}{m}$ as well as $\\frac{B + n M}{B + m} \\le \\frac{n M}{m}$, hence\n \\begin{align*}\n \\frac{1}{\\varepsilon} \\frac{A + n M}{A + m} \\frac{B + n M}{B + m}\n \\le \\frac{1}{\\varepsilon} \\left( \\frac{n M}{m} \\right)^2,\n \\end{align*}\n so it suffices to satisfy $2^k \\ge \\frac{1}{\\varepsilon} \\left( \\frac{n M}{m}\n \\right)^2$ in order for Equation~(\\ref{eq:k:large:enough}) to hold.\n This is indeed satisfied for any $k \\ge \\log{\\frac{1}{\\varepsilon}} + 2 \\left( \\log{n} + \\log{\\frac{M}{m}} \\right)$, showing the iteration bound.\n\n For the error bound, let $\\varnothing \\neq I^* \\subseteq [n]$ be arbitrary.\n By Equation~(\\ref{eq:rho:lower}), we know that $\\varrho(I^*) \\ge \\frac{A + m}{B + n M}$.\n Moreover, the binary search preserves two invariants:\n \\begin{enumerate}[label = (\\roman*)]\n \\item \\label{inv:1} $\\varrho^- \\le \\varrho(I) \\le \\varrho^+$ for the set $I$ in Algorithm \\ref{alg:fptas:mod:ratio}, and\n \\item \\label{inv:2} There is no set $\\varnothing \\neq I' \\subseteq [n]$ with $\\varrho(I') > \\varrho^+$.\n \\end{enumerate}\n Combining both and the fact that $\\abs{\\varrho^+ - \\varrho^-} \\le \\varepsilon \\frac{A + m}{B + n M}$ at termination, we conclude\n \\begin{align*}\n \\varrho(I) \\ge \\varrho^- = \\varrho^+ - \\left( \\varrho^+ - \\varrho^- \\right)\n \\ge \\varrho^+ - \\varepsilon \\frac{A + m}{B + n M}\n \\ge \\varrho(I^*) - \\varepsilon \\varrho(I^*) = (1 - \\varepsilon) \\varrho(I^*),\n \\end{align*}\n where we also exploited $\\varrho(I^*) \\ge \\frac{A + m}{B + n M}$\n (Equation~(\\ref{eq:rho:lower})) and we used invariant \\ref{inv:2}\n as $\\varrho(I^*) \\le 
\\varrho^+$.\n\\end{proof}\n\n\\begin{algorithm}[htbp]\n \\begin{algorithmic}[1]\n \\Require \\textsc{modular-ratio-max}\\xspace instance $x_1, \\dots, x_n, y_1, \\dots, y_n \\in \\ensuremath{\\mathbb{R}}_{> 0}$ and $A, B \\in \\ensuremath{\\mathbb{R}_{\\ge 0}}$;\n error tolerance $\\varepsilon > 0$.\n \\Ensure Index set $\\varnothing \\neq I \\subseteq [n]$ such that $\\varrho(I) \\ge (1 - \\varepsilon) \\varrho(I^*)$ for all $\\varnothing \\neq I^* \\subseteq [n]$.\n \\Procedure{check}{$\\lambda$}\n \\State Sort indices such that $x_{i_1} - \\lambda y_{i_1} \\ge x_{i_2} - \\lambda y_{i_2} \\ge \\dots \\ge x_{i_n} - \\lambda y_{i_n}$\n \\State $I \\gets \\{ i_1 \\}$\n \\State $S \\gets \\left( A - \\lambda B \\right) + \\left( x_{i_1} - \\lambda y_{i_1} \\right)$\n \\For{$\\ell = 2$ \\textbf{to} $n$}\n \\If{$x_{i_\\ell} - \\lambda y_{i_\\ell} > 0$}\n \\State $I \\gets I \\cup \\{ i_{\\ell} \\}$\n \\State $S \\gets S + \\left( x_{i_{\\ell}} - \\lambda y_{i_{\\ell}} \\right)$\n \\EndIf\n \\EndFor\n \\State \\Return $\\displaystyle \\begin{cases}\n I & \\text{ if } S \\ge 0 \\\\\n \\bot & \\text{ otherwise }\n \\end{cases}$\n \\EndProcedure\n \\State $I \\gets \\{ 1 \\}$ \\Comment{initialise with arbitrary feasible solution}\n \\State $m \\gets \\min\\left\\{ x_1, \\dots, x_n, y_1, \\dots, y_n \\right\\}$\n \\State $M \\gets \\max\\left\\{ x_1, \\dots, x_n, y_1, \\dots, y_n \\right\\}$\n \\State $\\varrho^{-} \\gets \\frac{A + m}{B + n M}$\n \\State $\\varrho^{+} \\gets \\frac{A + n M}{B + m}$\n \\While{$\\abs{\\varrho^{+} - \\varrho^{-}} > \\varepsilon \\frac{A + m}{B + n M}$} \\Comment{binary search till multiplicative error is $\\le \\varepsilon$}\n \\State $\\lambda \\gets \\frac{1}{2} \\left( \\varrho^{-} + \\varrho^{+} \\right)$\n \\State $I_{\\lambda} \\gets \\Call{check}{\\lambda}$\n \\If{$I_{\\lambda} = \\bot$}\n \\State $\\varrho^{+} \\gets \\lambda$\n \\Else\n \\State $I \\gets I_{\\lambda}$\n \\State $\\varrho^{-} \\gets \\lambda$\n \\EndIf\n \\EndWhile\n 
\\State \\Return $I$\n \\end{algorithmic}\n \\caption{FPTAS for \\textsc{modular-ratio-max}\\xspace}\n \\label{alg:fptas:mod:ratio}\n\\end{algorithm}\n\nWe remark that, if all input numbers are integers, we get an iteration bound of\n\\begin{align*}\n k \\le \\left\\lceil \\log{\\frac{1}{\\varepsilon}} + 2 \\left( \\log{n} + \\log{U} \\right) \\right\\rceil\n = \\ensuremath{\\mathcal{O}}\\left( \\log{\\frac{1}{\\varepsilon}} + \\log{n} + \\log{U} \\right),\n\\end{align*}\nwhere $U$ is the largest number that occurs as part of the input.\nWe have now solved \\textsc{modular-ratio-max}\\xspace.\n\n\\subsection{Proof of Lemma~\\ref{lem:approx}}\n\nIn the rest of this section, we will prove Lemma~\\ref{lem:approx}. But first we\nneed a helpful lemma.\n\n\\begin{lemma}\n \\label{lem:mod:ratio:approx}\n Let $f, g: (k + 1)^E \\to \\ensuremath{\\mathbb{R}_{\\ge 0}}$ \n be monotone $k$-submodular functions of low curvature.\n Then\n \\begin{align*}\n \\frac{(1 - c_f) S_f(\\ensuremath{\\mathbf{A}}) + f(\\varnothing, \\dots, \\varnothing)}{S_g(\\ensuremath{\\mathbf{A}}) + g(\\varnothing, \\dots, \\varnothing)}\n \\le \\frac{f(\\ensuremath{\\mathbf{A}})}{g(\\ensuremath{\\mathbf{A}})} \\le\n \\frac{S_f(\\ensuremath{\\mathbf{A}}) + f(\\varnothing, \\dots, \\varnothing)}{(1 - c_g) S_g(\\ensuremath{\\mathbf{A}}) + g(\\varnothing, \\dots, \\varnothing)}\n \\end{align*}\n for all $\\ensuremath{\\mathbf{A}} \\in (k + 1)^E$.\n\\end{lemma}\n\\begin{proof}\n Fix $\\ensuremath{\\mathbf{A}} \\in (k + 1)^E$\n and label the elements of $E$ in such a way that $A_i = \\left\\{ e^{(i)}_1, \\dots, e^{(i)}_{a_i} \\right\\}$ for each $1 \\le i \\le k$, where $a_i = |A_i|$.\n Now,\n \\begin{align*}\n f(\\ensuremath{\\mathbf{A}}) - f(\\varnothing, \\dots, \\varnothing)\n = \\sum_{i = 1}^k \\sum_{j = 1}^{a_i} \\Delta_{e^{(i)}_j, i}\\left( f \\mid A_1, \\dots, A_{i - 1}, \\{ e^{(i)}_1, \\dots, e^{(i)}_{j - 1} \\},\n \\varnothing, \\dots, \\varnothing \\right).\n \\end{align*}\n The $\\Delta_{e^{(i)}_j, 
i}\\left( f \\mid A_1, \\dots, A_{i - 1}, \\{ e^{(i)}_1, \\dots, e^{(i)}_{j - 1} \\},\n \\varnothing, \\dots, \\varnothing \\right)$ terms can be estimated in both directions. By the diminishing returns property, we know that\n \\begin{align*}\n \\Delta_{e^{(i)}_j, i}\\left( f \\mid A_1, \\dots, A_{i - 1}, \\{ e^{(i)}_1, \\dots, e^{(i)}_{j - 1} \\},\n \\varnothing, \\dots, \\varnothing \\right)\n \\le \\Delta_{e^{(i)}_j, i}\\left( f \\mid \\varnothing, \\dots, \\varnothing \\right),\n \\end{align*}\n while we conclude\n \\begin{align*}\n \\Delta_{e^{(i)}_j, i}\\left( f \\mid A_1, \\dots, A_{i - 1}, \\{ e^{(i)}_1, \\dots, e^{(i)}_{j - 1} \\},\n \\varnothing, \\dots, \\varnothing \\right)\n \\ge (1 - c_f) \\Delta_{e^{(i)}_j, i}\\left( f \\mid \\varnothing, \\dots, \\varnothing \\right)\n \\end{align*}\n from the curvature of $f$. Since\n \\begin{align*}\n S_f(\\ensuremath{\\mathbf{A}}) := \\sum_{i = 1}^k \\sum_{j = 1}^{a_i} \\Delta_{e^{(i)}_j, i}\\left( f \\mid \\varnothing, \\dots, \\varnothing \\right),\n \\end{align*}\n we see that $(1 - c_f) S_f(\\ensuremath{\\mathbf{A}}) \\le f(\\ensuremath{\\mathbf{A}}) - f(\\varnothing, \\dots, \\varnothing) \\le S_f(\\ensuremath{\\mathbf{A}})$ after combining both inequalities.\n Analogously, we derive $(1 - c_g) S_g(\\ensuremath{\\mathbf{A}}) \\le g(\\ensuremath{\\mathbf{A}}) - g(\\varnothing, \\dots, \\varnothing) \\le S_g(\\ensuremath{\\mathbf{A}})$ for $g$. 
Next,\n \\begin{align*}\n \\frac{f(\\ensuremath{\\mathbf{A}})}{g(\\ensuremath{\\mathbf{A}})}\n = \\frac\n {f(\\ensuremath{\\mathbf{A}}) - f(\\varnothing, \\dots, \\varnothing) + f(\\varnothing, \\dots, \\varnothing)}\n {g(\\ensuremath{\\mathbf{A}}) - g(\\varnothing, \\dots, \\varnothing) + g(\\varnothing, \\dots, \\varnothing)}\n \\le \\frac{S_f(\\ensuremath{\\mathbf{A}}) + f(\\varnothing, \\dots, \\varnothing)}{(1 - c_g) S_g(\\ensuremath{\\mathbf{A}}) + g(\\varnothing, \\dots, \\varnothing)}\n \\end{align*}\n and\n \\begin{align*}\n \\frac{f(\\ensuremath{\\mathbf{A}})}{g(\\ensuremath{\\mathbf{A}})}\n = \\frac\n {f(\\ensuremath{\\mathbf{A}}) - f(\\varnothing, \\dots, \\varnothing) + f(\\varnothing, \\dots, \\varnothing)}\n {g(\\ensuremath{\\mathbf{A}}) - g(\\varnothing, \\dots, \\varnothing) + g(\\varnothing, \\dots, \\varnothing)}\n \\ge \\frac{(1 - c_f) S_f(\\ensuremath{\\mathbf{A}}) + f(\\varnothing, \\dots, \\varnothing)}{S_g(\\ensuremath{\\mathbf{A}}) + g(\\varnothing, \\dots, \\varnothing)},\n \\end{align*}\n squeezing $\\frac{f(\\ensuremath{\\mathbf{A}})}{g(\\ensuremath{\\mathbf{A}})}$ between\n $\\frac{(1 - c_f) S_f(\\ensuremath{\\mathbf{A}}) + f(\\varnothing, \\dots, \\varnothing)}{S_g(\\ensuremath{\\mathbf{A}}) + g(\\varnothing, \\dots, \\varnothing)}$ and\n $\\frac{S_f(\\ensuremath{\\mathbf{A}}) + f(\\varnothing, \\dots, \\varnothing)}{(1 - c_g) S_g(\\ensuremath{\\mathbf{A}}) + g(\\varnothing, \\dots, \\varnothing)}$.\n\\end{proof}\n\n\\begin{lemma*}[Lemma~\\ref{lem:approx} restated]\n Let $\\ensuremath{\\mathbf{A}} \\in (k + 1)^E$ be an $(1 - \\varepsilon)$-approximate maximiser\n of $\\frac{S_f(\\ensuremath{\\mathbf{A}}) + f(\\varnothing, \\dots, \\varnothing)}{S_g(\\ensuremath{\\mathbf{A}}) + g(\\varnothing, \\dots, \\varnothing)}$.\n Then\n \\begin{align*}\n \\frac{f(\\ensuremath{\\mathbf{A}})}{g(\\ensuremath{\\mathbf{A}})} \\ge (1 - \\varepsilon)(1 - c_f)(1 - c_g) \\frac{f(\\ensuremath{\\mathbf{A}}^*)}{g(\\ensuremath{\\mathbf{A}}^*)}\n 
\\end{align*}\n for any $\\ensuremath{\\mathbf{A}}^* \\in (k + 1)^E$.\n\\end{lemma*}\n\\begin{proof}\n Note that $\\frac{f(\\ensuremath{\\mathbf{A}})}{g(\\ensuremath{\\mathbf{A}})}$ and $\\frac{f(\\ensuremath{\\mathbf{A}}^*)}{g(\\ensuremath{\\mathbf{A}}^*)}$ are included in the ranges stated by\n Lemma \\ref{lem:mod:ratio:approx}, i.\\,e.,\n \\begin{align*}\n \\frac{(1 - c_f) S_f(\\ensuremath{\\mathbf{A}}^*) + f(\\varnothing, \\dots, \\varnothing)}{S_g(\\ensuremath{\\mathbf{A}}^*) + g(\\varnothing, \\dots, \\varnothing)}\n \\le \\frac{f(\\ensuremath{\\mathbf{A}}^*)}{g(\\ensuremath{\\mathbf{A}}^*)} \\le\n \\frac{S_f(\\ensuremath{\\mathbf{A}}^*) + f(\\varnothing, \\dots, \\varnothing)}{(1 - c_g) S_g(\\ensuremath{\\mathbf{A}}^*) + g(\\varnothing, \\dots, \\varnothing)}\n \\end{align*}\n and\n \\begin{align*}\n \\frac{(1 - c_f) S_f(\\ensuremath{\\mathbf{A}}) + f(\\varnothing, \\dots, \\varnothing)}{S_g(\\ensuremath{\\mathbf{A}}) + g(\\varnothing, \\dots, \\varnothing)}\n \\le \\frac{f(\\ensuremath{\\mathbf{A}})}{g(\\ensuremath{\\mathbf{A}})} \\le\n \\frac{S_f(\\ensuremath{\\mathbf{A}}) + f(\\varnothing, \\dots, \\varnothing)}{(1 - c_g) S_g(\\ensuremath{\\mathbf{A}}) + g(\\varnothing, \\dots, \\varnothing)}.\n \\end{align*}\n Combining this with the fact that $\\ensuremath{\\mathbf{A}}$ is an $(1 - \\varepsilon)$-approximate maximiser and the non-negativity of $f$ and $g$, we conclude that\n \\begin{align*}\n \\frac{f(\\ensuremath{\\mathbf{A}})}{g(\\ensuremath{\\mathbf{A}})}\n &\\ge \\frac{(1 - c_f) S_f(\\ensuremath{\\mathbf{A}}) + f(\\varnothing, \\dots, \\varnothing)}{S_g(\\ensuremath{\\mathbf{A}}) + g(\\varnothing, \\dots, \\varnothing)} \\\\\n &\\ge (1 - c_f) \\frac{S_f(\\ensuremath{\\mathbf{A}}) + f(\\varnothing, \\dots, \\varnothing)}{S_g(\\ensuremath{\\mathbf{A}}) + g(\\varnothing, \\dots, \\varnothing)} \\\\\n &\\ge (1 - c_f)(1 - \\varepsilon) \\frac{S_f(\\ensuremath{\\mathbf{A}}^*) + f(\\varnothing, \\dots, \\varnothing)}{S_g(\\ensuremath{\\mathbf{A}}^*) + 
g(\\varnothing, \\dots, \\varnothing)} \\\\\n &\\ge (1 - \\varepsilon)(1 - c_f)(1 - c_g) \\frac{S_f(\\ensuremath{\\mathbf{A}}^*) + f(\\varnothing, \\dots, \\varnothing)}{(1 - c_g) S_g(\\ensuremath{\\mathbf{A}}^*) + g(\\varnothing, \\dots, \\varnothing)} \\\\\n &\\ge (1 - \\varepsilon)(1 - c_f)(1 - c_g) \\frac{f(\\ensuremath{\\mathbf{A}}^*)}{g(\\ensuremath{\\mathbf{A}}^*)}.\n \\end{align*}\n\\end{proof}\n\n{\\small\n\\bibliographystyle{plainurl}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction: super Riemann surfaces, punctures and uniformization}\\label{sec:intro}\n\nModuli spaces of complex supermanifolds of dimension $(1|1)$ and the subclass of those called super Riemann surfaces, are the cornerstones of superstring theory (for a review see \\cite{donagi,witten} as well as the original papers \\cite{CR, schwarz}). \n\nIn our previous work \\cite{PZ,IPZ}, we developed a coordinate system for super Teichm\\\"uller space associated to super Riemann surfaces with punctures, based upon uniformization taking Poincar\\'e metrics. These are generalizations of standard Penner coordinates \\cite{penner,pbook} on the Teichm\\\"uller space. Each such super Riemann surface is defined in this uniformization approach as a factor $\\what{\\mathbb{H}}^+\/\\Gamma$ of a super analogue $\\what{\\mathbb{H}}^+$ of the upper half-plane $\\mathbb{H}^+$, modulo the action of a discrete Fuchsian subgroup $\\Gamma$ of $OSp(1|2)$ acting on $\\what{\\mathbb{H}}^+$ as a superconformal transformation, which is a certain generalization of a standard fractional-linear transformation, see e.g. \\cite{Na}. \n\nThe super Teichm\\\"uller space $S\\mathcal{T}(F)$, where $F$ is the underlying Riemann surface of genus $g$ with $s$ punctures, is defined in full analogy with standard pure even case, viewed as a character variety: \n\\Eq{S\\mathcal{T}(F)=\\mathrm{Hom}'(\\pi_1(F)\\longrightarrow OSp(1|2))\/OSp(1|2),} so that $\\Gamma$ above belongs to the image. 
Here $\\pi_1(F)$ is the fundamental group of the underlying Riemann surface with punctures, and $\\mathrm{Hom}'$ in \\cite{PZ}, \\cite{IPZ} stands for the homomorphisms which map the elements of $\\pi_1(F)$ corresponding to small loops around the punctures to parabolic elements of $OSp(1|2)$, which means that their natural projections to $PSL(2,\\mathbb{R})$ are parabolic elements. \n\nIt turns out that the dimension of the resulting space is $(6g-6+2s|4g-4+2s)$. \nHowever, it is known from the study of the moduli space $\\mathcal{M}(\\Sigma)$ of a super Riemann surface $\\Sigma$ that its dimension is a little more subtle, namely it is \n\\Eq{\\dim \\mathcal{M}(\\Sigma)=(6g-6+2s|4g-4+2n_{NS}+n_R),} where $n_{NS}$ and $n_{R}$ are known as the number of Neveu-Schwarz (NS) and Ramond punctures on $\\Sigma$ respectively. These punctures are the analogues of punctures or marked points on an ordinary Riemann surface, except that on a super Riemann surface they are described by codimension $(1|0)$ divisors.\n\nLet us look at these classes of punctures in detail (we refer to \\cite{witten} for more information) from the point of view of complex supermanifolds and compare it with the uniformization approach, in order to understand where this discrepancy in the dimension count comes from. \n\nWe recall (see e.g. \\cite{witten}) that a \\emph{super Riemann surface} $\\Sigma$ is a $(1|1)$-dimensional complex supermanifold, i.e. locally isomorphic to $\\mathbb{C}^{1|1}$, together with a subbundle $\\mathcal{D}\\subset T\\Sigma$ of rank $(0|1)$, such that if $D$ is its nonzero section (in some open set $U\\subset \\Sigma$), $D^2=\\frac{1}{2}\\{D,D\\}$ is nowhere proportional to $D$. The local coordinates $(z, \\theta)\\in\\mathbb{C}^{1|1}$, where $z$ is even and $\\theta$ is odd, such that \\Eq{\nD=D_\\theta:=\\partial_{\\theta}+\\theta\\partial_z,} are known as \\emph{superconformal coordinates} and can be chosen for any such nonzero section $D$. 
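As a quick consistency check (a computation we add here; it is implicit in the definitions above), one can verify directly that in superconformal coordinates the section $D_\theta$ satisfies the defining condition, using $\partial_\theta^2=0$, $\theta^2=0$ and the graded Leibniz rule $\partial_\theta\,\theta = 1 - \theta\,\partial_\theta$:

```latex
% Square of D_theta = \partial_\theta + \theta\partial_z:
% the cross terms cancel and the \theta^2 term vanishes, leaving \partial_z.
D_\theta^2
  = \partial_\theta^2
  + \partial_\theta\,\theta\,\partial_z
  + \theta\,\partial_z\,\partial_\theta
  + \theta\,\partial_z\,\theta\,\partial_z
  = \bigl(\partial_z - \theta\,\partial_\theta\partial_z\bigr)
  + \theta\,\partial_\theta\partial_z
  + 0
  = \partial_z .
```

Hence $D_\theta^2$ is nowhere proportional to $D_\theta$, as required; the same computation applied to the Ramond-type section $D=\partial_\theta+z\theta\partial_z$ appearing below gives $D^2=z\partial_z$, which degenerates exactly along the divisor $z=0$.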
The \\emph{superconformal transformations} operating between patches are the ones preserving $\\mathcal{D}$. \n\nThe \\emph{NS puncture} is a natural generalization of a puncture of an ordinary Riemann surface, and can be considered as any point $(z_0,\\theta_0)$ on the super Riemann surface. Locally one can associate to it a $(0|1)$-dimensional divisor of the form $z=z_0-\\theta_0\\theta$, which is the orbit with respect to the action of the group generated by $D$, and this divisor uniquely determines the point $(z_0,\\theta_0)$ due to the superconformal structure.\n\nLet us consider the case when the puncture is at $(0,0)$ locally. In its neighborhood let us pick a coordinate transformation \n\\Eq{\nz=e^{w}, \\;\\;\\;\\;\\;\\; \\theta= e^{w\/2}\\eta, \\label{trans1}} such that the neighborhood (without the puncture) is mapped to a \\emph{supertube} with $w$ sitting on a cylinder $w \\sim w+2\\pi i$, and $D_\\theta$ becomes\n\\Eq{\nD_{\\theta}=e^{-w\/2}(\\partial_{\\eta}+\\eta\\partial_{w}).\n} Hence $(w,\\eta)$ are superconformal coordinates, and we have the full equivalence relation given by \\Eq{w \\sim w+2\\pi i, \\;\\;\\;\\;\\;\\; \\eta\\longrightarrow -\\eta.\\label{eq1}}\n\nTherefore, in the uniformization picture, the group element corresponding to the monodromy around an NS puncture should be conjugate to an element of $OSp(1|2)$ corresponding to a fractional linear transformation representing a translation, combined with an odd reflection. In other words, it is an element of a Borel subgroup of $OSp(1|2)$ generated by the maximal negative root (which is an $SL(2,\\mathbb{R})$ generator), accompanied by a fermionic reflection. \n\nThe case of a \\emph{Ramond puncture} is a whole different story. On the level of super Riemann surfaces, the associated divisor is determined as follows. In this case, we are looking at the \ncase when the condition that $D^2$ is linearly independent of $D$ is violated along some $(0|1)$ divisor. 
Namely, in some local coordinates $(z,\\theta)$ near the Ramond puncture with coordinates $(0,0)$, $\\mathcal{D}$ has a section of \nthe form \n\\Eq{D=\\partial_\\theta+z\\theta \\partial_z.}\nWe see that its square vanishes along the \\emph{Ramond divisor} $z=0$. One can map the neighborhood patch to the supertube using a different coordinate transformation \n\\Eq{z=e^{w},\\;\\;\\;\\;\\;\\; \\theta=\\eta\\label{trans2},} and these coordinates on the supertube are superconformal, since \\Eq{\nD=\\partial_{\\eta}+\\eta\\partial_{w}.} Notice that the identifications we have to impose on $(w, \\eta)$ now become: \\Eq{w \\sim w+2\\pi i,\\;\\;\\;\\;\\;\\; \\eta\\longrightarrow+\\eta.\\label{eq2}} Again, we see that the group element corresponding to the monodromy around the loop should belong to the same subgroup of the Borel subgroup as in the NS case, just without the extra reflection.\n\nWhat lesson did we learn from this discussion? Previously, in \\cite{PZ,IPZ} we obtained a bigger Teichm\\\"uller space than needed for the study of supermoduli, since the constraint we imposed on $\\Gamma$ with respect to the monodromies around punctures was too weak. The condition which is needed to reduce the dimension appropriately has to be the following: {\\it group elements, corresponding to the monodromies around punctures, should lie in the conjugacy classes of parabolic elements of the canonical $SL(2,\\mathbb{R})$ subgroup of $OSp(1|2)$}. This constraint will remove the necessary $(0|n_{R})$-dimensional bundle from $S\\mathcal{T}(F)$, which we will call the \\emph{Ramond decoration}.\n\nThe main goal of this note is to express this constraint in terms of the coordinates obtained in \\cite{PZ,IPZ}, and we will see that it is indeed an elegant formula, linear in the odd variables. \n\nThe structure of the paper is as follows. In Section \\ref{sec:coord} we review the construction of \\cite{PZ,IPZ}. In Section \\ref{sec:frac}, we recall the representation of $OSp(1|2)$ by fractional linear transformations. 
Section \\ref{sec:mono} is devoted to the explicit study of the monodromy around the punctures and, finally, in Section \\ref{sec:dim} we derive the formula for the aforementioned constraint.\n\n\n\n\\section{Coordinates of super Teichm\\\"uller space}\\label{sec:coord}\nLet us first recall the ingredients used to construct the super Teichm\\\"uller space for the case $\\mathcal{N}=1$. We will adapt the notation and construction from \\cite{IPZ} for the case $\\mathcal{N}=2$, which restricts to the case $\\mathcal{N}=1$.\n\n\\subsection{Definition of $OSp(1|2)$}\nLet the supergroup $SL(1|2)$ be the group of $(2|1)\\times (2|1)$ supermatrices with superdeterminant equal to 1, where we write\n\\Eq{g=\\veca{a&\\alpha&b\\\\\\gamma&f&\\beta\\\\c&\\delta&d}\\in SL(1|2)}\nsuch that $a,b,c,d,f$ are even entries, and $\\alpha,\\beta,\\gamma,\\delta$ are odd entries, with the supernumbers defined over $\\mathbb{R}$. We will consider the component $SL(1|2)_0$ where $f>0$. The \\emph{superdeterminant} or \\emph{Berezinian} is defined to be\n\\Eq{sdet(g):=f^{-1} \\det\\left(\\veca{a&b\\\\c&d}+f^{-1} \\veca{\\alpha\\gamma&\\alpha\\delta\\\\ \\beta\\gamma&\\beta\\delta}\n\\right),}\nwhile the \\emph{supertrace} is given by \n\\Eq{str(g):=a+d-f.}\nLet us denote by \n\\Eq{\nJ:=\\veca{0&0&1\\\\0&1&0\\\\-1&0&0}}\nwith $sdet(J)=1$, and define the \\emph{supertranspose} as\n\\Eq{\ng^{st}:=\\veca{a&\\gamma&c\\\\-\\alpha&f&-\\delta\\\\b&\\beta&d}.}\nThen the supergroup $OSp(1|2)\\subset SL(1|2)_0$ is defined to be the supermatrices $g\\in SL(1|2)_0$ satisfying\n\\Eq{\ng^{st}J g = J.}\nWe have a natural projection $SL(1|2)_0\\longrightarrow SL(2,\\mathbb{R})$ given by\n\\Eq{g=\\veca{a&\\alpha&b\\\\\\gamma&f&\\beta\\\\c&\\delta&d}\\mapsto \\frac{1}{\\sqrt{f_\\#}}\\veca{a_\\#&b_\\#\\\\c_\\#&d_\\#},}\nwhere $a_\\#$ denotes the {\\it body} of the supernumber $a$.\n\nFinally we denote two special types of matrices in $OSp(1|2)$ by\n\\Eq{\nD_a:=\\veca{a&0&0\\\\0&1&0\\\\0&0&a^{-1}}, 
\\;\\;\\;\\;\\;\\; Z_a:=\\veca{a&0&0\\\\0&a^2&0\\\\0&0&a}\n}\nthat will be useful later on. The matrix $Z_{-1}$ is also known as the \\emph{fermionic reflection}.\n\\subsection{Decorated super Teichm\\\"uller space}\nLet $F:=F_g^s$ be a Riemann surface with genus $g\\geq 0$ and $s\\geq 1$ punctures such that $2g+s-2>0$. Let $\\Delta$ be an ideal triangulation of $F$ whose vertices are lifted to the set of vertices $\\til{\\Delta}_\\infty$ at infinity on the universal cover $\\til{F}\\simeq \\mathbb{D}$ where $\\mathbb{D}$ is the Poincar\\'e unit disk, and $\\pi_1(F)$ acts on $\\mathbb{D}$ by the Deck transformations.\n\nThe coordinates of the decorated super Teichm\\\"uller space $S\\til{\\mathcal{T}}(F)$ can be realized in Minkowski space $\\mathbb{M}=\\mathbb{R}^{2,1|2}$ with inner product between two vectors $A=(x_1,x_2,y|\\phi,\\theta),A'=(x_1',x_2',y'|\\phi',\\theta')$ in $\\mathbb{M}$ given by\n\\Eq{\n\\langle A,A'\\rangle:=\\frac12(x_1x_2'+x_1'x_2)-yy'+\\phi\\theta'+\\phi'\\theta,}\nwhere the square root of such an inner product is called a \\emph{$\\lambda$-length}. We define the \\emph{positive light cone} to be\n\\Eq{\n\\mathcal{L}:=\\{ A\\in \\mathbb{M}: \\langle A,A\\rangle=0, x_1>0, x_2>0\\}.}\nAny point in $\\mathbb{M}$ can also be represented as \n\\Eq{\nM_c=\\veca{x_1&\\phi&y-c\\\\-\\phi&c&-\\theta\\\\ y+c&\\theta&x_2}\\in \\mathbb{M},}\nwhere $\\mathcal{L}$ corresponds to the subspace with $c=0$. Then $OSp(1|2)$ acts naturally on $\\mathbb{M}$ and $\\mathcal{L}$ by the adjoint action\n\\Eq{g\\cdot M_c:= g^{st} M_c g,\\;\\;\\;\\;\\;\\; g\\in OSp(1|2),} and every vector in the light cone $\\mathcal{L}$ can be put into the form ${e_\\theta:=(1,0,0|0,\\pm\\theta)\\in \\mathcal{L}}$ for some odd parameter $\\theta$. 
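As a small sanity check (added here; it follows immediately from the definitions), the vectors $e_\theta$ are indeed isotropic for the inner product above: taking $A=A'=(1,0,0|0,\pm\theta)$,

```latex
% <e_theta, e_theta> with x_1 = 1, x_2 = 0, y = 0, phi = 0,
% and theta-component equal to ±theta:
\langle e_\theta, e_\theta\rangle
  = \tfrac12\,(1\cdot 0 + 1\cdot 0) - 0\cdot 0
    + 0\cdot(\pm\theta) + 0\cdot(\pm\theta)
  = 0 .
```

So the condition $\langle A,A\rangle=0$ holds automatically for vectors of this form.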
The $OSp(1|2)$-orbit of\n$$e_0:=(1,0,0|0,0)\\in \\mathcal{L},$$\nis of special importance, and we will refer to\n\\Eq{\\mathcal{L}_0:=OSp(1|2)\\cdot e_0\\subset \\mathcal{L}}\nas the \\emph{special light cone}.\n\n\\begin{Lem}\\cite{PZ} Let $\\Delta ABC$ be a \\emph{positive triple} (i.e. the bosonic parts of $(A,B,C)$ are positively oriented) in the special light cone $\\mathcal{L}_0$. Then there is a unique $g\\in OSp(1|2)$ (up to composition by the fermionic reflection $Z_{-1}$), unique even $r,s,t>0$, and odd $\\phi$ such that\n\\Eq{g\\cdot A=r(0,1,0|0,0),\\;\\;\\;\\;\\;\\; g\\cdot B=t(1,1,1|\\phi,\\phi),\\;\\;\\;\\;\\;\\; g\\cdot C=s(1,0,0|0,0).\n\\label{standpos}}\n\\end{Lem}\nCoordinates of the form \\eqref{standpos} are said to be in \\emph{standard position}. We then have two associated transformations:\n\\begin{Lem}The \\emph{prime transformations} $P_\\theta^\\pm$ are given by\n\\Eq{P_\\theta^+:=\\veca{-1&\\theta&1\\\\-\\theta&1&0\\\\-1&0&0},\\;\\;\\;\\;\\;\\; P_\\theta^-:=(P_\\theta^+)^{-1} =\\veca{0&0&-1\\\\0&1&-\\theta\\\\1&-\\theta&-1},}\nwhich rotate the standard position with odd parameter $\\theta$, from $\\Delta ABC$ to $\\Delta BCA$ and $\\Delta CAB$ respectively.\n\nThe \\emph{upside-down} transformations $\\Upsilon^\\chi$ are given by\n\\Eq{\\Upsilon^{\\chi}:=J\\circ D_{\\sqrt{\\chi}},\n}\nwhich send the standard position from $\\Delta ABC$ to $\\Delta CDA$ in a quadrilateral $\\diamondsuit ABCD$ with \\emph{cross ratio}\n\\Eq{\n\\chi:=\\frac{ac}{bd},}\nwhere $a^2=\\langle A,B\\rangle$, $b^2=\\langle B,C\\rangle$, $c^2=\\langle C,D\\rangle$ and $d^2=\\langle D,A\\rangle$.\n\\end{Lem}\nOne readily checks that $P_\\theta^{\\pm}$ and $\\Upsilon^\\chi$ are all elements in $OSp(1|2)$.\n\\begin{Def} The \\emph{decorated super Teichm\\\"uller space} $S\\til{\\mathcal{T}}(F)$ is the space of $OSp(1|2)$-orbits of lifts \n\\Eq{\\ell:\\til{\\Delta}_\\infty\\longrightarrow\\mathcal{L}_0,} which are $\\pi_1$-equivariant for some super Fuchsian representation $\\what{\\rho}:\\pi_1\\longrightarrow OSp(1|2)$. 
More precisely,\n\\begin{itemize}\n\\item[(1)] $\\what{\\rho}(\\gamma)(\\ell(a)) = \\ell(\\gamma(a))$ for each $\\gamma\\in \\pi_1$ and $a\\in \\til{\\Delta}_\\infty$;\n\\item[(2)] the natural projection\n\\Eq{\n\\rho:\\pi_1\\xto{\\what{\\rho}} OSp(1|2)\\longrightarrow SL(2,\\mathbb{R})\\longrightarrow PSL(2,\\mathbb{R})}\nis a Fuchsian representation.\n\\end{itemize}\n\\end{Def}\nIn \\cite{PZ}, the space of all such lifts is constructed by a recursive procedure lifting the ideal triangles of $\\til{F}$ on the universal cover to the light cone, using the ``basic calculations\" to determine the $OSp(1|2)$-orbits on the light cone. \nThe construction is subsequently simplified and generalized to $\\mathcal{N}=2$ in \\cite{IPZ}, where the construction of the lift was directly connected to the combinatorial description of spin structures discovered in \\cite{PZ}. There, the spin structures were identified with classes of orientations of the trivalent fatgraph spine $\\tau$, dual \nto the triangulation $\\Delta$: two orientations belong to the same equivalence class if they are related by a sequence of reversals of the orientations of all edges incident to a given vertex. It was shown in \\cite{PZ} that under the elementary Whitehead move (flip), the orientations change as in Figure \\ref{flipgraphint} in the generic situation. 
\n\\begin{figure}[h!]\n\\begin{center}\n\\begin{tikzpicture}[ ultra thick, baseline=1cm]\n\\draw (0,0)--(210:1) node[above] at (210:0.7){$\\epsilon_2$};\n\\draw (0,0)--(330:1) node[above] at (330:0.7){$\\epsilon_4$};\n \n \\draw[ \n \tultra thick,\n decoration={markings, mark=at position 0.5 with {\\arrow{>}}},\n postaction={decorate}\n ]\n (0,0) -- (0,2);\n\\draw[yshift=2cm] (0,0)--(30:1) node[below] at (30:0.7){$\\epsilon_3$};\n\\draw[yshift=2cm] (0,0)--(150:1) node[below] at (150:0.7){$\\epsilon_1$};\n\\end{tikzpicture}\n\\begin{tikzpicture}[baseline]\n\\draw[->, thick](0,0)--(2,0);\n\\node[above] at (1,0) {};\n\\node at (-1,0){};\n\\node at (3,0){};\n\\end{tikzpicture}\n\\begin{tikzpicture}[ultra thick, baseline]\n\\draw (0,0)--(120:1) node[above] at (100:0.3){$\\epsilon_1$};\n\\draw (0,0)--(240:1) node[below] at (260:0.3){$\\epsilon_2$};\n \n \\draw[ \n decoration={markings, mark=at position 0.5 with {\\arrow{<}}},\n postaction={decorate}\n ]\n (0,0) -- (2,0);\n\\draw[xshift=2cm] (0,0)--(60:1) node[above] at (100:0.3){$-\\epsilon_3$};\n\\draw [xshift=2cm](0,0)--(-60:1) node[below] at (-80:0.3){$\\epsilon_4$};\n\\end{tikzpicture}\n\\end{center}\n\\caption{Spin graph evolution in the generic situation}\n\\label{flipgraphint}\n\\end{figure}\nThere $\\epsilon_i$ stands for the orientation of the corresponding edge and a negative sign represents orientation reversal.\n\nIt was also explicitly shown that this description of spin structures is compatible with Natanzon's description \\cite{Na} of a choice of lift $\\til{\\rho}:\\pi_1\\longrightarrow OSp(1|2)\\longrightarrow SL(2,\\mathbb{R})$ of the Fuchsian representation $\\rho$.\n\n\nAltogether this leads to the following description of the coordinates on $S\\til{\\mathcal{T}}(F)$:\n\\begin{Thm}\ni) The components of $S\\til{\\mathcal{T}}(F)$ are determined by the space of spin structures on $F$. 
To each component $C$ of $S\\til{\\mathcal{T}}(F)$, there are global affine coordinates on $C$ given by assigning to a triangulation $\\Delta$ of $F$, \n\\begin{itemize}\n\\item one even coordinate called $\\lambda$-length for each edge; \n\\item one odd coordinate called $\\mu$-invariant for each triangle, taken modulo an overall change of sign.\n\\end{itemize} In particular we have a real-analytic homeomorphism\n\\Eq{C\\longrightarrow \\mathbb{R}_{>0}^{6g-6+3s|4g-4+2s}\/\\mathbb{Z}_2.}\nii) The super Ptolemy transformations \\cite{PZ} provide the analytic relations between coordinates assigned to different choice of triangulation $\\Delta'$ of $F$, namely upon flip transformation. Explicitly (see Figure \\ref{ptolemy}), when all $a,b,c,d$ are different edges of the triangulations of $F$, Ptolemy transformations are as follows:\n\n\\begin{figure}[h!]\n\n\\centering\n\n\\begin{tikzpicture}[scale=0.6, baseline,ultra thick]\n\n\\draw (0,0)--(3,0)--(60:3)--cycle;\n\n\\draw (0,0)--(3,0)--(-60:3)--cycle;\n\n\\draw node[above] at (70:1.5){$a$};\n\n\\draw node[above] at (30:2.8){$b$};\n\n\\draw node[below] at (-30:2.8){$c$};\n\n\\draw node[below=-0.1] at (-70:1.5){$d$};\n\n\\draw node[above] at (1.5,-0.15){$e$};\n\n\\draw node[left] at (0,0) {};\n\n\\draw node[above] at (60:3) {};\n\n\\draw node[right] at (3,0) {};\n\n\\draw node[below] at (-60:3) {};\n\n\\draw node at (1.5,1){$\\theta$};\n\n\\draw node at (1.5,-1){$\\sigma$};\n\n\\end{tikzpicture}\n\\begin{tikzpicture}[baseline]\n\n\\draw[->, thick](0,0)--(1,0);\n\n\\node[above] at (0.5,0) {};\n\n\\end{tikzpicture}\n\\begin{tikzpicture}[scale=0.6, baseline,ultra thick]\n\n\\draw (0,0)--(60:3)--(-60:3)--cycle;\n\n\\draw (3,0)--(60:3)--(-60:3)--cycle;\n\n\\draw node[above] at (70:1.5){$a$};\n\n\\draw node[above] at (30:2.8){$b$};\n\n\\draw node[below] at (-30:2.8){$c$};\n\n\\draw node[below=-0.1] at (-70:1.5){$d$};\n\n\\draw node[left] at (1.7,1){$f$};\n\n\\draw node[left] at (0,0) {};\n\n\\draw node[above] at (60:3) 
{};\n\n\\draw node[right] at (3,0) {};\n\n\\draw node[below] at (-60:3) {};\n\n\\draw node at (0.8,0){$\\mu$};\n\n\\draw node at (2.2,0){$\\nu$};\n\n\\end{tikzpicture}\\\\\n\\caption{Generic flip transformation}\n\\label{ptolemy}\n\\end{figure}\n\n \n\n\\begin{eqnarray}\n&& ef=(ac+bd)\\Big(1+\\frac{\\sigma\\theta\\sqrt{\\chi}}{1+\\chi}\\Big),\\nonumber\\\\\n&&\\nu=\\frac{\\sigma+\\theta\\sqrt{\\chi}}{\\sqrt{1+\\chi}},\\quad\n\\mu=\\frac{\\sigma\\sqrt{\\chi}-\\theta}{\\sqrt{1+\\chi}},\n\\end{eqnarray}\nwhere $\\chi=\\frac{ac}{bd}$, so that the evolution of arrows is as in Figure \\ref{flipgraphint}.\n\\end{Thm}\n\n\n\n\nThe decorated super Teichm\\\"uller space $S\\til{\\mathcal{T}}(F)$ is naturally a principal $\\mathbb{R}_+^s$-bundle over the super Teichm\\\"uller space $S\\mathcal{T}(F)$ defined in the introduction, and one can descend to $S\\mathcal{T}(F)$ by taking the appropriate shear coordinates around punctures as in the bosonic case \\cite{penner}. \n\\subsection{Construction of the lift}\nIn this section we recall the construction of the lift $\\what{\\rho}:\\pi_1(F)\\longrightarrow OSp(1|2)$ given by the coordinates of the decorated Teichm\\\"uller space described in \\cite{PZ, IPZ}. Let us fix a spin structure $\\omega$ corresponding to the component of this lift, which is represented by an orientation of the fatgraph spine $\\tau$ of the triangulation. \n\nThe fundamental domain $\\textbf{D}\\subset\\til{F}$ on the universal cover of $F$ is naturally a $(4g+2s)$-gon. It suffices to determine the image of the generators $\\gamma_i\\in \\pi_1(F)$, $i=1,...,2g+s$, which identifies a pair of frontier edges $c_i,c_i'$ of $\\textbf{D}$. Let $c_i'=\\gamma_i(c_i)$. To determine the image of $\\what{\\rho}(\\gamma_i)\\in OSp(1|2)$, let $\\Delta ABC$ and $\\Delta A'B'C'$ be the lift of the unique pair of triangles such that $BC=\\ell(c_i), B'C'=\\ell(c_i'), \\ell^{-1} (\\Delta ABC)\\subset \\textbf{D}$, and $\\ell^{-1}(\\Delta A'B'C')\\not\\subset \\textbf{D}$. 
Then by definition of $\\ell$ there is a unique transformation $g\\in OSp(1|2)$ bringing the standard position from $\\Delta ABC$ to $\\Delta A'B'C'$ and matching $BC$ to $B'C'$. Explicitly, let $\\gamma_i$ be homotopically represented by a path in $\\tau$. Then \n\\begin{Prop}\nThe image $\\what{\\rho}(\\gamma_i):=g\\in OSp(1|2)$ is a composition of the form\n\\Eq{\n\\what{\\rho}(\\gamma_i)=\\prod_k Z_{\\epsilon_k}\\circ \\Upsilon^{\\chi_k}\\circ P_{\\theta_k}^{\\pm}\\in OSp(1|2),}\nwhere\n\\begin{itemize}\n\\item $\\epsilon_k\\in\\{1,-1\\}$ according to whether the segment of $\\gamma_i$ is aligned with the orientation $\\omega$.\n\\item $\\Upsilon^{\\chi_k}$ is the upside-down transformation corresponding to the pair of triangles crossing the segment of $\\gamma_i$.\n\\item $P_{\\theta_k}^{\\pm}$ is the prime transformation corresponding to $\\gamma_i$ turning left ($+$) or right ($-$) at the vertex of $\\tau$.\n\\end{itemize}\n\\end{Prop}\n\\section{$OSp(1|2)$ as fractional linear transform}\\label{sec:frac}\nIn this section, we recall the representation of $OSp(1|2)$ by fractional linear transformations on the super upper half-plane.\n\nRecall that in the bosonic case, $PSL(2,\\mathbb{R})$ acts transitively on the upper half-plane $\\mathbb{H}^+:=\\{x+iy|y>0\\}\\subset \\mathbb{C}$ by\n\\Eq{z\\mapsto \\frac{az+b}{cz+d},}\nwhere $z=x+iy$ and $\\veca{a&b\\\\c&d}\\in PSL(2,\\mathbb{R})$. \n\nIn the super case, we have an analogue given as follows. Let $\\mathbb{C}^{1|1}$ be the complex superplane, and consider the super upper half-plane $\\what{\\mathbb{H}}^+:=\\{(z,\\eta)|Im(z_\\#)>0\\}\\subset \\mathbb{C}^{1|1}$ where $z_\\#$ denotes the body of $z$. 
Then it is well-known that\n\\begin{Prop}\\cite{CR, PZ, witten} An element $g=\\veca{a&\\alpha&b\\\\\\gamma&f&\\beta\\\\c&\\delta&d}\\in OSp(1|2)$ acts transitively on $\\what{\\mathbb{H}}^+$ by the superconformal transformations\n\\Eq{\nz&\\mapsto \\frac{az+b}{cz+d}+\\eta\\frac{\\gamma z+\\delta}{(cz+d)^2},\\\\\n\\eta&\\mapsto \\frac{\\gamma z+\\delta}{cz+d}+\\eta\\frac{1+\\frac{1}{2} \\delta\\gamma}{cz+d}.\n}\n\\end{Prop}\nIn particular, a translation of the form $z\\mapsto z+b$ is given by an element of a Borel subgroup of the form $g=\\veca{\\pm1&0&b\\\\0&1&0\\\\0&0&\\pm1}$ which belongs to the canonical $SL(2,\\mathbb{R})$ subgroup of $OSp(1|2)$.\n\nReturning to our setting, recall from the introduction that we considered the conformal transformations \\eqref{trans1}, \\eqref{trans2} from the neighborhood around the punctures to the supertube. If we restrict to the neighborhood within $0<|z|<1$, then the image of the transformation becomes the left half-plane $\\{(z,\\eta)|Re(z_\\#)<0\\}\\subset\\mathbb{C}^{1|1}$ instead. Therefore, rotating our setting by 90 degrees, the equivalence relations given by the simple translation of the even variable \\eqref{eq1}, \\eqref{eq2} in the complex direction are then represented by the action of the lower Borel elements\n\\Eq{\\veca{\\pm1&0&0\\\\0&1&0\\\\b&0&\\pm1}\\in OSp(1|2).}\n\n\\section{Monodromies and decorations}\\label{sec:mono}\nWe are now ready to discuss the monodromies around punctures. The construction of the map $\\what{\\rho}:\\pi_1(F)\\longrightarrow OSp(1|2)$ requires that the group elements $\\what{\\rho}(\\gamma)$ corresponding to loops $\\gamma$ going around punctures, when projected to $PSL(2,\\mathbb{R})$, be parabolic, i.e. 
conjugate to $\\pm\\veca{1&0\\\\b&1}$.\n\nFor a loop $\\gamma\\in \\pi_1(F)$ around a puncture, by definition the monodromy $\\what{\\rho}$ should fix the lift of the puncture $A\\in \\mathcal{L}_0$ which belongs to a sequence of triangles $\\Delta AB_{i+1}B_i$ (see Figure \\ref{punc}). Acting by $OSp(1|2)$ if necessary, let us choose the standard position such that the puncture is lifted to $A=r(0,1,0|0,0)$ for some $r>0$.\n\\begin{figure}[htb!]\n\\centering\n\\begin{tikzpicture}[baseline=(0), every node\/.style={inner sep=0, minimum size=0.2cm, circle, fill=black }, x=0.7cm, y=0.7cm]\n\\node[red] (0) at (0,0){};\n\\node [red,fill=none] at (0,-0.5){$A$};\n\\node[gray] (1) at (0:4){};\n\\node[gray] (2) at (60:4){};\n\\node[gray] (3) at (120:4){};\n\\node[gray] (4) at (180:4){};\n\\node[gray] (5) at (240:4){};\n\\node[gray] (6) at (300:4){};\n\\node [gray, fill=none] at (0:4.5){$B_1$};\n\\node [gray, fill=none] at (60:4.5){$B_2$};\n\\node [gray, fill=none] at (120:4.5){$B_3$};\n\\node [gray, fill=none] at (180:4.5){$B_4$};\n\\node [gray, fill=none] at (240:4.5){$B_5$};\n\\node [gray, fill=none] at (300:4.5){$B_n$};\n\\node [gray, fill=none] at (0,-4){\\huge $\\cdots$};\n\\draw[gray] (1)--(2)--(3)--(4)--(5)--(6)--(1);\n\\draw[gray] (1)--(0)--(2);\n\\draw[gray] (3)--(0)--(4);\n\\draw[gray] (5)--(0)--(6);\n\\draw[red,->-, very thick] (30:2.3)--(90:2.3);\n\\draw[red,->-, very thick] (90:2.3)--(150:2.3);\n\\draw[red,->-, very thick] (150:2.3)--(210:2.3);\n\\draw[red,->-, very thick] (210:2.3)--(270:2.3);\n\\draw[red,->-, very thick] (270:2.3)--(330:2.3);\n\\draw[red,->-, very thick] (330:2.3)--(30:2.3);\n\\draw (30:2.3)--(30:4.3);\n\\draw (90:2.3)--(90:4.3);\n\\draw (150:2.3)--(150:4.3);\n\\draw (210:2.3)--(210:4.3);\n\\draw (270:2.3)--(270:4.3);\n\\draw (330:2.3)--(330:4.3);\n\\node [rotate=30, red, fill=none] at (300:2.3){\\huge$\\cdots$};\n\\node [red, fill=none] at (2.5,-0.5){$\\gamma$};\n\\node [red, fill=none, below right] at (30:2.3){$\\tau_1$};\n\\node 
[red,fill=none, below] at (90:2.3){$\\tau_2$};\n\\node [red,fill=none, below right] at (150:2.3){$\\tau_3$};\n\\node [red,fill=none, below] at (210:2.3){$\\tau_4$};\n\\node [red,fill=none, below right] at (270:2.3){$\\tau_5$};\n\\node [red,fill=none, below] at (330:2.3){$\\tau_n$};\n\\end{tikzpicture}\n\\caption{The loop $\\gamma$ in $\\tau$ around a puncture $A$ surrounded by $n$ triangles}\n\\label{punc}\n\\end{figure}\n\nThen by definition of the adjoint action of $OSp(1|2)$ on $\\mathcal{L}_0$, a generic element $\\what{\\rho}(\\gamma)$ of the monodromy is given by a lower triangular matrix\n\\Eq{\ng=\\veca{\\pm1&0&0\\\\\\theta&1&0\\\\B&\\phi&\\pm1}\\in OSp(1|2),\\label{gg}}\nand the conditions for it to be in $OSp(1|2)$ require that $\\phi=\\mp \\theta$, hence we only have one odd parameter $\\theta$.\n\nLet us call the punctures for which the projection of the monodromy to $SL(2,\\mathbb{R})$ has trace $2$ (resp. $-2$) Ramond (resp. Neveu-Schwarz (NS)) punctures. \n\nIn the introduction we pointed out that in order to obtain the punctured super Riemann surfaces with proper boundary conditions on Ramond and NS punctures, we need to impose the condition that the monodromy element around the puncture be conjugate to a simple translation of the even variable in the transformed space, in which $OSp(1|2)$ acts by fractional-linear transformation. We saw in the previous section that this corresponds to an element of the lower Borel subgroup of the $SL(2,\\mathbb{R})$ subgroup of $OSp(1|2)$.\n\nHence we now tighten things a bit and require also that the element $g$ in \\eqref{gg} above be conjugate to \n\\Eq{\\veca{\\pm1&0&0\\\\0&1&0\\\\B&0&\\pm1}\\in OSp(1|2)\\label{standform}}\nby elements of $OSp(1|2)$ which fix the puncture. 
In other words, \n\\begin{Def}[Monodromy constraint]\nFor every lift of a puncture $p\\in \\mathcal{L}_0$, the $OSp(1|2)$-orbit of pairs $(p,g)\\in \\mathcal{L}_0\\times OSp(1|2)$ under the action given by\n\\Eq{\nU\\cdot (p,g) = (U\\cdot p, UgU^{-1}),\\;\\;\\;\\;\\;\\; U\\in OSp(1|2),\n}\nis required to contain a point $(p,g_0)$ where $g_0$ is of the form \\eqref{standform}.\n\\end{Def}\nLet us see how these constraints affect each type of puncture.\n\n\n\\begin{Lem} In the case of an NS puncture, the monodromy constraint is always satisfied.\n\\end{Lem}\n\\begin{proof} Take $U:=\\veca{1&0&0\\\\\\theta\/2&1&0\\\\0&-\\theta\/2&1}$, then it fixes $p=A$. We have $$U^{-1}=\\veca{1&0&0\\\\-\\theta\/2&1&0\\\\0&\\theta\/2&1}$$ and\n$$U\\cdot\\veca{-1&0&0\\\\\\theta&1&0\\\\B&\\theta&-1}\\cdot U^{-1} =\\veca{-1&0&0\\\\0&1&0\\\\B&0&-1}$$\nas required.\n\\end{proof}\nHowever, for a Ramond puncture, the situation is different.\n\\begin{Lem}\nThe monodromy constraint is satisfied if and only if $\\theta=0$.\n\\end{Lem}\n\\begin{proof}\nThe conjugation by a general $U$ can be rewritten as\n$$\\veca{a&\\alpha&b\\\\\\gamma&f&\\beta\\\\c&\\delta&d}\\veca{1&0&0\\\\\\theta&1&0\\\\B&-\\theta&1}=\\veca{1&0&0\\\\0&1&0\\\\B'&0&1}\\veca{a&\\alpha&b\\\\\\gamma&f&\\beta\\\\c&\\delta&d}$$\nwhich leads to the constraints\n\\Eqn{\nBb=\\alpha\\theta, \\;\\;\\;\\;\\;\\; b\\theta=\\beta\\theta=0, \\;\\;\\;\\;\\;\\; f\\theta+B\\beta=0, \\;\\;\\;\\;\\;\\; B'a+\\theta\\delta=Bd, \\;\\;\\;\\;\\;\\; B'\\alpha+d\\theta=0.}\nSince we also need to fix the vector corresponding to the puncture $p=A$, the conjugation requires $U$ to be lower triangular. 
In particular $\\beta=0$, hence $f\\theta=0\\Rightarrow \\theta=0$ since $f>0$.\n\\end{proof}\nTherefore, to get a ``true\" super Teichm\\\"uller space, we would like to impose the condition $\\theta=0$ on the group elements corresponding to monodromies around Ramond punctures.\n\\section{Dimension reduction}\\label{sec:dim}\nIn this section, we show that the condition $\\theta=0$ for monodromies around Ramond punctures imposes a linear constraint in terms of the odd coordinates of $S\\til{\\mathcal{T}}(F)$. In particular, this will reduce the dimension of $S\\til{\\mathcal{T}}(F)$, and hence $S\\mathcal{T}(F)$, making the overall odd dimension $4g-4+2n_{NS}+n_R$, where $n_{NS}$ and $n_R$ are the number of NS and Ramond punctures respectively, as explained in the introduction.\n\nFix a fatgraph orientation $\\omega$ on the fatgraph spine $\\tau$ corresponding to a spin structure. Consider a loop homotopic to $\\gamma=(\\tau_1,\\tau_2,...,\\tau_n,\\tau_1)\\in \\pi_1(F)$ on $\\tau$ passing through the vertices $\\tau_i\\in\\tau$ associated to the triangles around a Ramond puncture, and we assume $\\gamma$ is going in a counter-clockwise direction (cf. Figure \\ref{punc}). 
According to our construction, the monodromy is given by\n\\Eq{\n\\what{\\rho}(\\gamma)=\\prod_{k=1}^n Z_{\\epsilon_k}\\circ \\Upsilon^{\\chi_k}\\circ P_{\\theta_k}^+\\in OSp(1|2),}\nwhere the product is read from right to left, $\\theta_i$ is the odd parameter for the triangle $\\tau_i$, $\\chi_i$ is the cross ratio between $\\tau_i$ and $\\tau_{i+1}$, and $\\epsilon_i=-1$ if the path $\\gamma$ at $\\tau_i\\longrightarrow \\tau_{i+1}$ has the same orientation as $\\omega$, and $\\epsilon_i=+1$ otherwise.\n\\begin{Prop} $\\what{\\rho}(\\gamma)\\in OSp(1|2)$ is of the form\n\\Eq{\n\\what{\\rho}=\\veca{*_0&0&0\\\\-*_1&1&0\\\\-*_3&*_2&*_0},}\nwhere $c_k:=-\\epsilon_k\\sqrt{\\chi_k}$, and\n\\Eq{\n*_0&:=\\prod_{k=1}^nc_k=(-1)^n\\prod_{k=1}^n \\epsilon_k,\\\\\n*_1&:=\\sum_{k=1}^{n} \\theta_k\\prod_{j=1}^{k-1}c_j^{-1},\\\\\n*_2&:=\\sum_{k=1}^n \\theta_k \\prod_{j=k}^n c_j = (*_0)(*_1),\\\\\n*_3&:=\\sum_{j=1}^n\\left(\\prod_{k=1}^{j-1}c_k^{-1}\\prod_{k=j}^nc_k\\right)+\\sum_{1\\leq i0$, i.e. the length $n$ of the loop $\\gamma$, and the number of segments of $\\gamma$ matching orientations with $\\omega$, have the same parity.\n\\end{Rem}\nTherefore, the monodromy constraints for the Ramond punctures amount to $*_1=0$, and we arrive at the main result of the paper:\n\\begin{Thm}\nThe monodromy constraints for the Ramond punctures are given by the following linear equation in the $\\theta_i$ around each Ramond puncture of the surface:\n\\Eq{\n&\\sum_{i=1}^{n} \\theta_i\\prod_{j=1}^{i-1}c_j^{-1}=0\\nonumber\\\\\n\\Longleftrightarrow & \\theta_1-\\theta_2\\frac{\\epsilon_1}{\\sqrt{\\chi_1}}+\\theta_3\\frac{\\epsilon_1\\epsilon_2}{\\sqrt{\\chi_1\\chi_2}}-.... 
+(-1)^{n-1}\\theta_n\\frac{\\epsilon_1\\cdots\\epsilon_{n-1}}{\\sqrt{\\chi_1\\cdots\\chi_{n-1}}}= 0,\n}\nor upon multiplication by $*_0$:\n\\Eq{\n&\\sum_{i=1}^{n} \\theta_i\\prod_{j=i}^{n}c_j=0\\nonumber\\\\\n\\Longleftrightarrow & \\theta_1\\sqrt{\\chi_1\\cdots\\chi_n}\\epsilon_1\\cdots\\epsilon_n - \\theta_2\\sqrt{\\chi_2\\cdots\\chi_n}\\epsilon_2\\cdots\\epsilon_n +\\cdots +(-1)^{n-1}\\theta_n\\sqrt{\\chi_n}\\epsilon_n=0.\n}\n\\end{Thm}\nThese constraints remove the required $(0|n_{R})$-dimensional bundle from $S\\mathcal{T}(F)$, which we call the \\emph{Ramond decoration}.\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} diff --git a/data_all_eng_slimpj/shuffled/split2/finalzzgrwc b/data_all_eng_slimpj/shuffled/split2/finalzzgrwc new file mode 100644 index 0000000000000000000000000000000000000000..9512faa446518ad529ec041a3e7473033b438904 --- /dev/null +++ b/data_all_eng_slimpj/shuffled/split2/finalzzgrwc @@ -0,0 +1,5 @@ +{"text":"\\section{Introduction}\\label{sec:intro}\n\n\n\nHamiltonian ordinary differential equations and their generalisation, Poisson systems, \nare extensively used as mathematical models to describe the dynamical evolution of various physical systems \nin science and engineering. The flow of a Hamiltonian system is known to be a symplectic map, whereas the flow of a Poisson system is a Poisson map.\nThis is a key property that the flow of a numerical method should also have. \nRecent years have thus witnessed a large amount of research activity in the design and numerical analysis of symplectic numerical schemes, resp. Poisson integrators, for deterministic (non-canonical) Hamiltonian systems, see for instance the classical monographs \\cite{Sanz-Serna1994,rl,HLW02,bcGNI15} and references therein. \n\nThis research has naturally extended to the realm of stochastic Hamiltonian systems. 
Without aiming to be exhaustive, we mention the works \n\\cite{Milstein2002,Milstein2002a,w07,MR2491434,m10,bb12,\nmdd12,MR3094570,MR3218332,MR3195572,MR3552716,MR3579605,MR3882980,MR3873562,MR3952248,chjs21} \non the numerical analysis of symplectic methods for stochastic Hamiltonian systems. \n\nSince symplectic methods for stochastic Hamiltonian systems offer advantages compared to \nstandard numerical methods, as observed in the above list of references, \nit is natural to ask whether one can derive numerical integrators respecting the structure of \nstochastic Poisson systems of the Stratonovich form \n\\begin{equation*}\n\\left\\lbrace\n\\begin{aligned}\n\\,\\mathrm{d} y(t)&=B(y(t))\\nabla H(y(t))\\,\\,\\mathrm{d} t+\\sum_{k=1}^mB(y(t))\\nabla \\widehat H_k(y(t))\\circ\\,\\,\\mathrm{d} W_k(t)\\\\\ny(0)&=y_0,\n\\end{aligned}\n\\right.\n\\end{equation*}\nwith Hamiltonian functions $H,\\widehat{H}_1,\\ldots,\\widehat{H}_m\\colon\\mathbb{R}^d\\to\\mathbb{R}$, \na structure matrix $B\\colon \\mathbb{R}^d \\to \\mathbb{R}^{d\\times d}$, and independent \nstandard real-valued Wiener processes $W_1,\\ldots,W_m$, see Section~\\ref{sec:poisson} for details on the notation. \n\n\n\n{Stochastic Poisson systems are popular models to describe diverse random phenomena, see below and \\cite{MR629977,misawa94,misawa99,MR2198598,MR2408499,MR2502472,MR3747641,MR2644322,MR2970274,MR3210739,wwc21} for instance. However, to the best of our knowledge, apart from the recent work~\\cite{hong2020structurepreserving}, there has so far been no general study in the literature of integrators for stochastic Poisson systems which respect their geometric properties. In this manuscript, we intend to fill this gap and study the notion of stochastic Poisson integrators \n(see Definition~\\ref{defPI} and \\cite[Theorem~3.1]{hong2020structurepreserving}): such integrators need to be Poisson maps (see Definition~\\ref{def:Pmap}) and to preserve the Casimir functions of the system. 
Imposing these conditions is natural: indeed we prove that the flow of the stochastic Poisson system is a Poisson map (see Theorem~\\ref{th:flowP}) and also preserves Casimir functions. In addition, the present notion of stochastic Poisson integrators is a natural generalisation of the notion of Poisson integrators for deterministic Poisson systems.\n\nThe main contribution of this manuscript is the analysis of a class of explicit stochastic Poisson integrators, see equation~\\eqref{slpI}, based on a splitting strategy. The splitting strategy is often applicable for stochastic Lie--Poisson systems, whose structure matrix $B(y)$ depends linearly on $y$. The construction of the scheme is illustrated for stochastic perturbations of three systems which have been studied extensively in the deterministic case: Maxwell--Bloch, rigid body and sine--Euler equations. Note that these examples give stochastic differential equations (SDEs) with non-globally Lipschitz drift and diffusion coefficients, thus standard explicit schemes such as the Euler--Maruyama method are not expected to converge (strongly or weakly) or even to satisfy moment bounds. Instead, under appropriate assumptions, we prove that the proposed integrators converge strongly and weakly, with rates $1\/2$ and $1$ respectively, see Theorem~\\ref{thm-general}. Indeed, if one assumes that the system admits a Casimir function with compact level sets (which is the case for the rigid body and the sine--Euler equations), both the exact and the numerical solutions of the stochastic Poisson systems remain bounded almost surely, uniformly with respect to the time step size. 
Our main convergence result, Theorem~\\ref{thm-general}, is illustrated with extensive numerical experiments.\n}\n\n{\nIn addition, we study the properties of the stochastic Poisson systems (see Subsection~\\ref{sec:multi}) and stochastic Poisson integrators (see Subsection~\\ref{APsplit}) in a multiscale regime, namely when the Wiener processes are approximated by a smooth noise. The proposed splitting schemes are asymptotic preserving in this diffusion approximation regime, in the sense of the notion recently introduced in~\\cite{BRR}. This property, which is not satisfied by standard integrators, is illustrated with numerical experiments.\n}\n\n\n\n\n\n\n{\nLet us now compare our contributions with existing works. As already mentioned, the notion of stochastic Poisson integrators is a natural generalisation of the notion of Poisson integrators for deterministic systems. In the stochastic case, we are only aware of the recent work~\\cite{hong2020structurepreserving}, where techniques which differ from ours are employed. First, in~\\cite{hong2020structurepreserving}, the proof that the flow is a Poisson map consists in using the Darboux--Lie theorem to rewrite the stochastic Poisson system into a canonical form, i.\\,e. as a stochastic Hamiltonian system, for which it is already known that the flow is a symplectic map. On the contrary, our approach to prove Theorem~\\ref{th:flowP} below is more direct and extends the approach considered in~\\cite[Chapter VII]{HLW02} for the deterministic case. Second, the authors of~\\cite{hong2020structurepreserving} design stochastic Poisson integrators by starting from a stochastic symplectic scheme for the canonical version, and then by coming back to the original variables. Note that the transformations between the non-canonical and canonical variables are often found by solving partial differential equations, and that symplectic schemes are usually implicit. 
Our approach is more direct and leads to explicit splitting schemes. In particular, for the stochastic rigid body system, the scheme proposed in~\\cite{hong2020structurepreserving} is based on the midpoint rule and is thus implicit, whereas the scheme proposed in our work is explicit, and we are able to prove strong and weak convergence results.\n}\n\n\n\n\n\n\n\nThe paper is organized as follows. Section~\\ref{sec:poisson} is devoted to the setting and to the description of the main properties of stochastic Poisson systems, namely the preservation of Casimir functions and the Poisson map property (Theorem~\\ref{th:flowP}, see Subsection~\\ref{sec:Pmap}). The three main examples of stochastic Lie--Poisson systems (Maxwell--Bloch, rigid body and sine--Euler equations) are introduced in Subsection~\\ref{sec:examples}. The diffusion approximation regime is presented in Subsection~\\ref{sec:multi}. Section~\\ref{sec:integrators} presents the main theoretical contributions of this work: we introduce the notion of stochastic Poisson integrators (Definition~\\ref{defPI}) and we propose a class of such integrators using a splitting technique. The main convergence result (Theorem~\\ref{thm-general}) is stated and proved in Subsection~\\ref{sec:cv} (using an auxiliary result proved in Appendix~\\ref{app-auxlem}): under appropriate assumptions, the proposed explicit splitting stochastic Poisson integrators converge in strong and weak senses, with orders $1\/2$ and $1$ respectively. The asymptotic preserving property in the diffusion approximation regime is studied in Section~\\ref{APsplit}. Finally, Section~\\ref{sect-numexp} presents numerical experiments using the proposed splitting stochastic Poisson integrators and variants for the three examples of stochastic Lie--Poisson systems (Maxwell--Bloch, rigid body and sine--Euler equations). 
We illustrate various qualitative and quantitative properties, which show the superiority of the proposed schemes compared with existing methods.\n\n\n\n\n\n\n\n\\section{Stochastic Poisson and Lie--Poisson systems}\\label{sec:poisson}\n\nIn this section, we set notation and introduce the stochastic differential equations studied in this article, \nnamely \\emph{stochastic (Lie--)Poisson systems}. \nWe then state the main properties of such systems and give several examples\nfor which stochastic Poisson integrators are designed and analysed in Sections~\\ref{sec:integrators}~and~\\ref{sect-numexp}. \nWe conclude this section with a diffusion approximation result justifying why considering stochastic Poisson systems \nwith a Stratonovich interpretation of the noise is relevant.\n\n\\subsection{Setting and stochastic Poisson dynamics}\n\nLet $d,m$ be positive integers: $d$ is the dimension of the considered system and $m$ is the dimension \nof the stochastic perturbation. We study \\emph{stochastic Poisson systems} of the type\n\\begin{equation}\\label{prob}\n\\left\\lbrace\n\\begin{aligned}\n\\,\\mathrm{d} y(t)&=B(y(t))\\nabla H(y(t))\\,\\,\\mathrm{d} t+\\sum_{k=1}^mB(y(t))\\nabla \\widehat H_k(y(t))\\circ\\,\\,\\mathrm{d} W_k(t),\\\\\ny(0)&=y_0,\n\\end{aligned}\n\\right.\n\\end{equation}\nwith \\emph{Hamiltonian functions} \n$H,\\widehat{H}_1,\\ldots,\\widehat{H}_m\\colon\\mathbb{R}^d\\to\\mathbb{R}$, with \\emph{structure matrix} \n$B\\colon \\mathbb{R}^d \\to \\mathbb{R}^{d\\times d}$,\nand with independent standard real-valued Wiener processes $W_1,\\ldots,W_m$ \ndefined on a probability space $(\\Omega, \\mathcal F, \\mathbb P)$. \nThe noise in the SDE~\\eqref{prob} is understood in the Stratonovich sense. \nThe initial value $y_0$ is assumed to be non-random for ease of presentation, but\nthe results of this paper can be extended to the case of random $y_0$ (independent of $W_1,\\ldots,W_m$ and satisfying appropriate moment bounds). 
\n\nHenceforth we assume at least that $H\\in\\mathcal{C}^2$, $\\widehat{H}_1,\\ldots,\\widehat{H}_m \\in \\mathcal{C}^3$,\nand that $B\\in\\mathcal{C}^2$. The gradient is denoted by $\\nabla$, e.g.\n$\\nabla H(y)=\\bigl(\\frac{\\partial H(y)}{\\partial y_1},\\ldots,\\frac{\\partial H(y)}{\\partial y_d}\\bigr)\\in\\mathbb{R}^d$.\nThe structure matrix $B$ is assumed to satisfy the following properties.\n\\begin{itemize}\n\\item Skew-symmetry: for every $y\\in\\mathbb{R}^d$ and for all $i,j\\in\\{1,\\ldots,d\\}$, one has\n\\[\nB_{ij}(y)=-B_{ji}(y);\n\\]\n\\item Jacobi identity: for every $y\\in\\mathbb{R}^d$ and for all $i,j,k\\in\\{1,\\ldots,d\\}$, one has\n\\[\n\\sum_{\\ell=1}^d\\left( \\frac{\\partial B_{ij}(y)}{\\partial y_\\ell}B_{\\ell k}(y)+\n\\frac{\\partial B_{jk}(y)}{\\partial y_\\ell}B_{\\ell i}(y)+\\frac{\\partial B_{ki}(y)}{\\partial y_\\ell}B_{\\ell j}(y) \\right)=0.\n\\]\n\\end{itemize}\nSometimes the structure matrix $B$ is referred to as the Poisson matrix. \nIn many applications the structure matrix $B$ depends linearly on $y$: \nif there is a family of real numbers $\\bigl(b_{ij}^k\\bigr)_{1\\le i,j,k\\le d}$ such that\n\\begin{align}\n\\label{Lie-Poisson.B}\nB_{ij}(y)=\\sum_{k=1}^db_{ji}^ky_k\n\\end{align}\nfor all $y\\in\\mathbb{R}^d$ and $i,j=1,\\ldots,d$, then the system~\\eqref{prob} is called a stochastic \\emph{Lie--Poisson system}. \nExamples are provided below in Section~\\ref{sec:examples}.\nA stochastic \\emph{Hamiltonian} system is obtained if $d$ is even and $B(y)=J^{-1}$ for all $y\\in\\mathbb{R}^d$, \nwhere \n$$J=\\begin{pmatrix} 0 & Id \\\\ -Id & 0\\end{pmatrix}.$$\nIf $ \\widehat{H}_k = 0 $ for all $k=1,\\ldots,m$, then the SDE \\eqref{prob} reduces to a classical deterministic (Lie--)Poisson or Hamiltonian system; cf.~\\cite{HLW02}.\n\nProperties and numerical approximations of stochastic Hamiltonian systems and of deterministic Poisson systems \nhave been extensively studied in the literature, see the references in the introduction. 
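For instance, for the $\mathfrak{so}(3)$-type structure matrix $B(y)v=y\times v$ of the rigid body example, both conditions can be checked numerically; a minimal Python sketch (all helper names are illustrative, and linearity of $B$ in $y$ makes the partial derivatives $\partial B_{ij}/\partial y_\ell$ constant):

```python
import numpy as np

def B(y):
    # so(3)-type Lie-Poisson structure matrix: B(y) v = y x v
    return np.array([[0.0, -y[2], y[1]],
                     [y[2], 0.0, -y[0]],
                     [-y[1], y[0], 0.0]])

# dB[i, j, l] = d B_ij / d y_l (constant, since B is linear in y)
dB = np.zeros((3, 3, 3))
for l in range(3):
    e = np.zeros(3)
    e[l] = 1.0
    dB[:, :, l] = B(e)

rng = np.random.default_rng(0)
y = rng.standard_normal(3)
M = B(y)

# Skew-symmetry: B_ij(y) = -B_ji(y)
assert np.allclose(M, -M.T)

# Jacobi identity: sum_l (dB_ij/dy_l B_lk + dB_jk/dy_l B_li + dB_ki/dy_l B_lj) = 0
for i in range(3):
    for j in range(3):
        for k in range(3):
            s = sum(dB[i, j, l] * M[l, k] + dB[j, k, l] * M[l, i]
                    + dB[k, i, l] * M[l, j] for l in range(3))
            assert abs(s) < 1e-12
```

The same check applies verbatim to any candidate structure matrix, with the derivatives replaced by finite differences when $B$ is not linear.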
\nThe results presented in this work generalise these studies to the above stochastic Poisson case, with a special focus on\nstochastic Lie--Poisson systems.\n\nUnder the previous regularity assumptions, the drift coefficient $y\\mapsto B(y)\\nabla H(y)$ is of class $\\mathcal{C}^1$, and, for all $k=1,\\ldots,m$, \nthe diffusion coefficient $y\\mapsto B(y)\\nabla \\widehat{H}_k(y)$ is of class $\\mathcal{C}^2$. \nAs a consequence, the stochastic differential equation~\\eqref{prob} \nis locally well-posed: for any deterministic initial condition $y_0\\in\\mathbb{R}^d$, there exists a random time $\\tau$, which is almost surely positive, such that~\\eqref{prob} admits a unique solution $t\\in[0,\\tau)\\mapsto y(t)$ with $y(0)=y_0$. Below we will present a criterion to ensure global well-posedness ($\\tau=\\infty$ almost surely for any initial condition $y_0$). This criterion is applied to study the examples presented below.\n\n\n\\subsection{Properties of stochastic Poisson systems}\n\nDeterministic and stochastic Poisson systems have several geometric properties which we discuss in this section.\nLet $\\mathcal{H} \\colon\\mathbb{R}^d\\to\\mathbb{R}$ be a mapping of class $\\mathcal{C}^2$. 
The evolution of \n$\\mathcal{H}(y)$ along a solution $y(t)$ of the stochastic Poisson system~\\eqref{prob} is described by \n\\begin{align}\n\\label{dH.Poisson.bracket}\n\\,\\mathrm{d}\\mathcal{H}(y(t))=\\{\\mathcal{H},H\\}(y(t))\\,\\mathrm{d} t+\\sum_{k=1}^m \\{\\mathcal{H},\\widehat H_k\\}(y(t))\\circ\\,\\,\\mathrm{d} W_k(t),\n\\end{align}\nwhere the Poisson bracket $\\{\\cdot,\\cdot\\}$ associated with the structure matrix $B$ is defined by\n\\begin{align}\n\\label{Poisson.bracket}\n\\{F,G\\}(y)=\\displaystyle\\sum_{i,j=1}^d\\frac{\\partial F(y)}{\\partial y_i}B_{ij}(y)\\frac{\\partial G(y)}{\\partial y_j}=\\nabla F(y)^T B(y)\\nabla G(y).\n\\end{align}\nThe identity~\\eqref{dH.Poisson.bracket} is proved using the chain rule for solutions of SDEs written in the Stratonovich formulation. The fact that the same structure matrix $B$ appears in both the deterministic and stochastic parts of the system~\\eqref{prob} is important to express~\\eqref{dH.Poisson.bracket} using the Poisson bracket defined by~\\eqref{Poisson.bracket}, which depends only on $B$ but not on the Hamiltonian functions $H,\\widehat H_1,\\ldots,\\widehat H_m$. This assumption on the system~\\eqref{prob} is the key to studying the geometric properties of such a system and its numerical discretisation, such as the preservation of Casimir functions or the Poisson map property. Most of the properties stated below would not hold if different structure matrices were considered \nin the stochastic terms. If $\\{\\mathcal{H},H\\}=0$ and if the system is deterministic (i.e. if $ \\widehat{H}_k = 0 $ for all $k=1,\\ldots,m$), then\nthe equality~\\eqref{dH.Poisson.bracket} implies that $t\\mapsto\\mathcal{H}(y(t))$ is constant, i.e. that\n$\\mathcal{H}$ is preserved by the flow of the deterministic Poisson system. 
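The bracket $\{F,G\}(y)=\nabla F(y)^T B(y)\nabla G(y)$ is straightforward to evaluate numerically for any pair of functions; the following Python sketch (hypothetical names; $B$ again of $\mathfrak{so}(3)$ type and a rigid-body-style Hamiltonian with assumed inertia values) checks the antisymmetry $\{F,G\}=-\{G,F\}$ and $\{H,H\}=0$, both consequences of the skew-symmetry of $B$:

```python
import numpy as np

def B(y):
    return np.array([[0.0, -y[2], y[1]],
                     [y[2], 0.0, -y[0]],
                     [-y[1], y[0], 0.0]])

def grad(F, y, eps=1e-6):
    # central finite-difference gradient of F at y
    g = np.zeros_like(y)
    for i in range(len(y)):
        e = np.zeros_like(y)
        e[i] = eps
        g[i] = (F(y + e) - F(y - e)) / (2 * eps)
    return g

def bracket(F, G, y):
    # {F, G}(y) = grad F(y)^T B(y) grad G(y)
    return grad(F, y) @ B(y) @ grad(G, y)

I = np.array([1.0, 2.0, 3.0])          # assumed moments of inertia
H = lambda y: 0.5 * np.sum(y**2 / I)   # rigid-body-type Hamiltonian
F = lambda y: y[0] * y[1]

y = np.array([0.4, -1.1, 0.7])
assert abs(bracket(H, H, y)) < 1e-8                       # {H, H} = 0
assert abs(bracket(F, H, y) + bracket(H, F, y)) < 1e-8    # antisymmetry
```
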
Since every smooth Hamiltonian has the property that\n$\\{H,H\\}=0$, this means, in particular, that the flow of a deterministic Poisson system $\\dot{y}=B(y)\\nabla H(y)$ preserves the Hamiltonian $H$ (see for instance~\\cite[Sect. IV.1 and VII.2]{HLW02}).\nIn the stochastic case, however, the Hamiltonian is in general not preserved. Precisely, Equation~\\eqref{dH.Poisson.bracket} yields the following sufficient condition\n\\[\n\\{\\mathcal{H},H\\}=\\{\\mathcal{H},\\widehat H_1\\}=\\ldots=\\{\\mathcal{H},\\widehat H_m\\}=0\n\\]\nto obtain $\\,\\mathrm{d} \\mathcal{H}(y(t))=0$ and hence preservation of $\\mathcal{H}$ by the flow of the stochastic Poisson system~\\eqref{prob}.\n\nIn addition, deterministic and stochastic Poisson systems may have conserved quantities called Casimir functions.\n\\begin{definition}\nA function $C\\colon\\mathbb{R}^d\\to\\mathbb{R}$ of class $\\mathcal{C}^2$ is called a \\emph{Casimir} function \nof the stochastic Poisson system~\\eqref{prob} if for all $y\\in\\mathbb{R}^d$ one has\n$$\n\\nabla C(y)^TB(y)=0.\n$$\n\\end{definition}\nObserve that the definition of a Casimir function for stochastic and deterministic Poisson systems only depends on the structure matrix $B$, but not on the Hamiltonian functions $H,\\widehat{H}_1,\\ldots,\\widehat{H}_m$.\nA Casimir function $C$ satisfies\n\\[\n\\{C,H\\}=\\{C,\\widehat H_1\\}=\\ldots=\\{C,\\widehat H_m\\}=0,\n\\]\nsee the definition of the Poisson bracket in equation~\\eqref{Poisson.bracket}. As a consequence, owing to~\\eqref{dH.Poisson.bracket}, any Casimir function $C$ is preserved by the flow of the stochastic Poisson system~\\eqref{prob}, i.e. $C(y(t))=C(y_0)$ for all $t\\in[0,\\tau)$, independently of the choice of the Hamiltonian functions $H,\\widehat{H}_1,\\ldots,\\widehat{H}_m$ (recall that the same structure matrix $B$ has to appear in both the deterministic and stochastic parts of~\\eqref{prob} \nfor Casimir functions to be preserved). 
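For the $\mathfrak{so}(3)$-type structure matrix $B(y)v=y\times v$, the function $C(y)=\tfrac12\|y\|^2$ is the classical Casimir of the rigid body; the defining identity $\nabla C(y)^T B(y)=0$ can be verified directly, as in this short sketch:

```python
import numpy as np

def B(y):
    return np.array([[0.0, -y[2], y[1]],
                     [y[2], 0.0, -y[0]],
                     [-y[1], y[0], 0.0]])

# Candidate Casimir C(y) = 0.5 * ||y||^2, so grad C(y) = y.
# C is a Casimir iff grad C(y)^T B(y) = 0 for every y,
# independently of the Hamiltonians H, Hhat_1, ..., Hhat_m.
rng = np.random.default_rng(1)
for _ in range(100):
    y = rng.standard_normal(3)
    assert np.allclose(y @ B(y), 0.0)
```

The check succeeds exactly (not only up to rounding tolerances), since each component of $y^T B(y)$ is a difference of two identical floating-point products.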
The preservation of Casimir functions is a desirable feature for a numerical method when applied to the problem~\\eqref{prob}.\n\n\n\n\n\n\nA criterion to ensure global well-posedness of the dynamics can be stated based on the preservation of Casimir functions by solutions of stochastic Poisson systems: it suffices to assume the existence of a Casimir function with compact level sets.\n\\begin{proposition}\\label{propo:global}\nAssume that the stochastic Poisson system~\\eqref{prob} admits a Casimir function $C$ such that \nfor all $c\\in\\mathbb R$ the level sets $\\{y\\in\\mathbb{R}^d\\,\\colon\\, C(y)=c\\}$ are compact. \nThen for any initial condition $y_0\\in\\mathbb{R}^d$ the SDE~\\eqref{prob} admits a unique global solution $\\bigl(y(t)\\bigr)_{t\\ge 0}$, with $y(0)=y_0$, \nsuch that almost surely one has, for all $t\\ge0$, $C(y(t))=C(y_0)$ and \n\\[\n\\|y(t)\\|\\le R(y_0)=\\max_{y\\in\\mathbb R^d, C(y)=C(y_0)}\\|y\\|.\n\\]\n\\end{proposition}\n\n\\begin{proof}\nThe proof of Proposition~\\ref{propo:global} follows from a straightforward truncation argument: \nlet $R=R(y_0)+1$, and introduce a mapping $H^R:\\mathbb{R}^d\\to\\mathbb{R}$ of class $\\mathcal{C}^2$ and mappings $\\widehat H_1^R,\\ldots,\\widehat H_m^R:\\mathbb{R}^d\\to\\mathbb{R}$ of class $\\mathcal{C}^3$, with compact support included in the ball $\\{y\\in\\mathbb{R}^d\\,\\colon\\, \\|y\\|\\le R\\}$, and such that $H^R(y)=H(y)$, $\\widehat H_k^R(y)=\\widehat H_k(y)$ for all $y$ with $\\|y\\|\\le R(y_0)$ and $k=1,\\ldots,m$. Set $f^R(y)=B(y)\\nabla H^R(y)$ and $\\hat{f}_k^R(y)=B(y)\\nabla\\widehat{H}_k^R(y)$ for all $y\\in\\mathbb{R}^d$ and $k=1,\\ldots,m$. Then $f^R$ is globally Lipschitz continuous and, for all $k=1,\\ldots,m$, $\\hat{f}_k^R$ is of class $\\mathcal{C}^2$, bounded and with bounded derivatives. 
By the standard well-posedness result for SDEs with globally Lipschitz continuous nonlinearities (when written in It\\^o form), the SDE\n\\[\n\\,\\mathrm{d} y^R(t)=f^R(y^R(t))\\,\\,\\mathrm{d} t+\\sum_{k=1}^m\\hat{f}_k^R(y^R(t))\\circ\\,\\,\\mathrm{d} W_k(t)\n\\]\nadmits a unique global solution $\\bigl(y^R(t)\\bigr)_{t\\ge 0}$ with $y^R(0)=y_0$. Due to the discussion above, this solution preserves the Casimir function $C$: $C(y^R(t))=C(y^R(0))=C(y_0)$ for all $t\\ge 0$. Since the level sets of the Casimir function $C$ are assumed to be compact, by the definition of $R(y_0)$, one has $\\|y^R(t)\\|\\le R(y_0)<R$ for all $t\\ge 0$. The It\\^o interpretation of the noise is not consistent with this preservation property, whereas the Stratonovich one is, due to the chain rule. As a consequence, the Stratonovich interpretation is the natural candidate for the diffusion approximation limit. Checking rigorously that indeed $y^\\epsilon(t)\\to y(t)$ in distribution requires additional arguments which are omitted in this work.\n\n\\begin{remark}\nLet $(t,y,\\xi_1,\\ldots,\\xi_m)\\mapsto \\varphi^\\epsilon(t,y,\\xi_1,\\ldots,\\xi_m)$ define the flow map associated with~\\eqref{prob_eps}. Then for all $t\\ge 1$, $\\epsilon\\in(0,1)$ and all $\\xi_1,\\ldots,\\xi_m\\in\\mathbb{R}$, the mapping\n\\[\ny\\in \\mathbb{R}^d\\mapsto \\varphi^\\epsilon(t,y,\\xi_1,\\ldots,\\xi_m)\n\\]\nis a Poisson map in the sense of Definition~\\ref{def:Pmap}. This may be proved by modifications of the proof of Theorem~\\ref{th:flowP}, using the chain rule. The details are left to the reader.\n\\end{remark}\n\nThe multiscale system~\\eqref{prob_eps} has components evolving at different time scales: the component $y^\\epsilon$ evolves at a time scale of order ${\\rm O}(1)$, whereas the Ornstein--Uhlenbeck processes $\\xi_1^\\epsilon,\\ldots,\\xi_m^\\epsilon$ evolve at a time scale of order ${\\rm O}(\\epsilon^{-2})$. 
The definition of effective integrators for the multiscale system~\\eqref{prob_eps}, which avoid prohibitive time step size restrictions of the type $h={\\rm O}(\\epsilon^2)$, and which lead to consistent discretisation of $y(t)$ when $\\epsilon\\to 0$, is a crucial and challenging problem. This question is briefly studied in Section~\\ref{APsplit} below: we define so-called asymptotic preserving schemes (in the spirit of~\\cite{BRR}), \nemploying the preservation of the geometric structure satisfied by the stochastic Poisson integrators introduced in the next section.\n\n\n\n\n\n\n\\section{Stochastic Poisson integrators}\\label{sec:integrators}\n\nIn Section~\\ref{sec:poisson}, we have proved that the flow of a stochastic Poisson system of the type~\\eqref{prob} satisfies two key properties: it is a Poisson map (see Theorem~\\ref{th:flowP}) and it preserves Casimir functions which are associated with the structure matrix $B$. Having the methodology of geometric numerical integration in mind, we are thus led to introduce the concept of a stochastic Poisson integrator for the stochastic Poisson system~\\eqref{prob}, see Definition~\\ref{defPI}. We then present and analyse a general strategy to derive \nefficient stochastic Poisson integrators, based on a splitting technique, which can be implemented easily for some stochastic Lie--Poisson systems. \nWe then proceed with a convergence analysis of the proposed splitting integrators: Theorem~\\ref{thm-general} states that, under appropriate conditions, the scheme has in general strong and weak convergence rates equal to $1\/2$ and $1$, respectively. First, the analysis is performed for an auxiliary problem~\\eqref{auxSDE} with globally Lipschitz continuous nonlinearities, see Lemma~\\ref{lemm-aux}. Second, if the system admits a Casimir function with compact level sets, the auxiliary convergence result is applied to get strong and weak error estimates for the SDE~\\eqref{prob}. 
Finally, we show that the proposed stochastic Poisson integrators based on a splitting technique satisfy an asymptotic preserving property when considering the multiscale SDE~\\eqref{prob_eps} in the diffusion approximation regime.\n\n\\subsection{Definition and splitting integrators for stochastic (Lie--)Poisson systems}\\label{ssec:LP}\n\nLet us recall that symplectic integrators preserve the key features of deterministic and stochastic Hamiltonian systems, while Poisson integrators do so for deterministic Poisson systems. Such geometric numerical integrators offer various benefits over \nclassical time integrators in the deterministic setting, see for instance \\cite{HLW02,rl,bcGNI15}. \nWe shall now state the definition and study the properties of stochastic Poisson integrators for stochastic Poisson systems~\\eqref{prob}. On the one hand, this extends the definition and analysis of deterministic Poisson integrators (see~\\cite[Th. 3.1]{hong2020structurepreserving} for another approach). On the other hand, this extends the definition and analysis of stochastic symplectic integrators for stochastic Hamiltonian systems.\n\n\nWe first consider general stochastic Poisson integrators, and then focus the discussion on a class of splitting integrators.\n\n\\subsubsection{Stochastic Poisson integrators}\\label{sssec:PI}\nThe following notation is used below. The time step size is denoted by $h>0$. \nA numerical scheme is defined as follows: for all $n\\ge 1$,\n\\begin{equation}\\label{eq:integrator}\ny^{[n]}=\\Phi_h(y^{[n-1]},\\Delta_n W_1,\\ldots,\\Delta_n W_m),\n\\end{equation}\nwith Wiener increments $\\Delta_nW_k=W_k(nh)-W_k((n-1)h)$, $k=1,\\ldots,m$. The Wiener increments are independent centered real-valued Gaussian random variables with variance $h$. 
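The recursion above is simple to implement once the map is given; as a sanity check, the following Python sketch drives it with an integrator that happens to be exact, for the scalar Stratonovich SDE $\mathrm{d}y=\mu y\,\mathrm{d}t+\sigma y\circ\mathrm{d}W$ with solution $y(t)=y_0\exp(\mu t+\sigma W(t))$ (a toy example with assumed coefficients, not one of the systems studied in this work):

```python
import numpy as np

def integrate(Phi, y0, h, n_steps, rng):
    # y^[n] = Phi(y^[n-1], h, Delta_n W), with Delta_n W ~ N(0, h) i.i.d.
    y, increments = y0, []
    for _ in range(n_steps):
        dW = rng.normal(0.0, np.sqrt(h))
        increments.append(dW)
        y = Phi(y, h, dW)
    return y, np.sum(increments)   # also return W(n_steps * h)

mu, sigma = 0.3, 0.5   # toy coefficients (assumed)
# Exact flow map of dy = mu*y dt + sigma*y o dW (Stratonovich chain rule)
Phi = lambda y, h, dW: y * np.exp(mu * h + sigma * dW)

rng = np.random.default_rng(42)
h, N, y0 = 0.01, 200, 1.0
yN, W_T = integrate(Phi, y0, h, N, rng)
# composing exact sub-flows reproduces the exact solution at t = N*h
assert np.isclose(yN, y0 * np.exp(mu * N * h + sigma * W_T))
```
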
The mapping $\\Phi_h$ is referred to as the integrator.\n\n\\begin{definition}\\label{defPI}\nA numerical scheme~\\eqref{eq:integrator} for the stochastic Poisson system~\\eqref{prob} is called \na \\emph{stochastic Poisson integrator} if\n\\begin{itemize}\n\\item for all $h>0$ and all $\\Delta w_1,\\ldots,\\Delta w_m\\in \\mathbb{R}$, \nthe mapping $$y\\mapsto \\Phi_h(y,\\Delta w_1,\\ldots,\\Delta w_m)$$ is a Poisson map (in the sense of Definition~\\ref{def:Pmap}),\n\\item if $C$ is a Casimir of the stochastic Poisson system~\\eqref{prob}, then $\\Phi_h$ preserves $C$, precisely\n\\[\nC(\\Phi_h(y,\\Delta w_1,\\ldots,\\Delta w_m))=C(y)\n\\]\nfor all $y\\in\\mathbb{R}^d$, $h>0$ and $\\Delta w_1,\\ldots,\\Delta w_m\\in\\mathbb{R}$.\n\\end{itemize}\n\\end{definition}\nAs in the deterministic case, it is seen that standard integrators like the Euler--Maruyama scheme are \nnot (stochastic) Poisson integrators. In addition, it is a difficult task to construct Poisson integrators for general Poisson systems, see \\cite[Chapter VII.4.2]{HLW02} for deterministic problems. Therefore, the design of stochastic Poisson integrators requires exploiting the special structure of each considered problem. In this article, we focus \non constructing and analysing stochastic Poisson integrators for stochastic Lie--Poisson systems. More precisely, we propose \nexplicit Poisson integrators for a large class of stochastic \nLie--Poisson systems using a splitting strategy. In Section~\\ref{sect-numexp}, we will exemplify this strategy for three models introduced in Section~\\ref{sec:poisson}: \nthe stochastic Maxwell--Bloch equations (Example~\\ref{expl-MB} and Subsection~\\ref{ssec:PMB}), \nthe stochastic free rigid body equations~\\eqref{srb} (Example~\\ref{expl-SRB} and Subsection~\\ref{ssec:PRB}), \nas well as the stochastic sine--Euler equations (Example~\\ref{expl-SE} and Subsection~\\ref{ssec:PSE}). 
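The failure of standard integrators mentioned above can be seen concretely. For a rigid-body-type system with Casimir $C(y)=\|y\|^2$ and convention $B(y)v=y\times v$, an Euler--Maruyama step $y'=y+h\,y\times\nabla H(y)+\Delta W\,y\times c$ satisfies $\|y'\|^2=\|y\|^2+\|h\,y\times\nabla H(y)+\Delta W\,y\times c\|^2\ge\|y\|^2$, since $y\cdot(y\times a)=0$: the Casimir drifts monotonically upward. A Python sketch (inertia values and noise vector are illustrative choices; Euler--Maruyama is written for the It\^o interpretation and serves only to show the loss of the Casimir):

```python
import numpy as np

I = np.array([1.0, 2.0, 3.0])      # assumed moments of inertia
c = np.array([0.2, 0.1, -0.3])     # assumed constant noise direction
drift = lambda y: np.cross(y, y / I)   # B(y) grad H(y) = y x grad H(y)
diff = lambda y: np.cross(y, c)        # B(y) grad Hhat(y) for Hhat(y) = c . y

rng = np.random.default_rng(3)
h, N = 1e-2, 1000
y = np.array([1.0, 1.0, 1.0])
C0 = y @ y                         # Casimir C(y) = ||y||^2
for _ in range(N):
    y = y + h * drift(y) + rng.normal(0.0, np.sqrt(h)) * diff(y)
# each step adds ||h*drift + dW*diff||^2 >= 0 to C, so C drifts upward
assert y @ y > C0 + 1e-4
```
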
\n\n\\subsubsection{Splitting integrators for stochastic Poisson systems}\\label{sssec:SP}\n\nWe first propose an abstract splitting integrator for general stochastic Poisson systems~\\eqref{prob}. \nWe then focus on stochastic Lie--Poisson systems~\\eqref{slp} and propose implementable stochastic Poisson integrators for this class of SDEs, which includes \nthe three examples mentioned above.\n\n\nThe key observation made in \\cite[p.3044]{MR1246065} is that a wide class of deterministic Lie--Poisson systems can be split into subsystems which are all linear. This was used in \\cite{MR1246065}\nfor the construction of very efficient geometric integrators for deterministic Lie--Poisson systems.\nInspired by~\\cite{MR1246065}, we propose and analyse efficient explicit Poisson integrators for stochastic Lie--Poisson systems. On an abstract level, our splitting approach is not restricted to Lie--Poisson systems and could also be applied to general stochastic Poisson systems~\\eqref{prob}.\n\n\n\n\n\nLet us consider a stochastic Poisson system of the type~\\eqref{prob}, and assume that the Hamiltonian $H$ can be split as follows:\n\\[\nH=\\sum_{k=1}^{p}H_k\n\\]\nfor some $p\\ge 1$, where the Hamiltonian functions $H_1,\\ldots,H_p$ have the same regularity as $H$.\n\nTo define the abstract splitting schemes for~\\eqref{prob}, it is convenient to define the flows associated with the subsystems:\n\\begin{itemize}\n\\item for each $k=1,\\ldots,p$, let $(t,y)\\in\\mathbb{R}^+\\times\\mathbb{R}^d\\mapsto \\varphi_k(t,y)$ be the flow associated with the ordinary differential equation $\\dot{y}_k=B(y_k)\\nabla H_k(y_k)$;\n\\item for each $k=1,\\ldots,m$, let $(t,y)\\in\\mathbb{R}\\times\\mathbb{R}^d\\mapsto \\widehat{\\varphi}_k(t,y)$ be the flow associated with the ordinary differential equation $\\dot{y}_k=B(y_k)\\nabla \\widehat H_k(y_k)$.\n\\end{itemize}\nNote that it is sufficient to consider $\\varphi_1(t,\\cdot),\\ldots,\\varphi_p(t,\\cdot)$ for $t\\ge 0$; however, the 
mappings $\\widehat{\\varphi}_1(t,\\cdot),\\ldots,\\widehat{\\varphi}_m(t,\\cdot)$ need to be considered for $t\\in\\mathbb{R}$.\n\nBelow, we shall also use the notation $\\exp(hY_{H_k})=\\varphi_k(h,\\cdot)$ and \n$\\exp(hY_{\\widehat H_k})=\\widehat \\varphi_k(h,\\cdot)$, where $Y_{H_k}=B\\nabla H_k$, resp. $Y_{\\widehat H_k}=B\\nabla\\widehat H_k$, to denote the vector fields of the corresponding differential equations. \nFor the definition of the splitting integrators below, it is essential to note \nthat the exact solution of the Stratonovich stochastic differential equation \n$\\,\\mathrm{d} y_k=B(y_k)\\nabla \\widehat H_k(y_k)\\circ \\,\\mathrm{d} W_k(t)$ is given by \n$y_k(t)=\\widehat \\varphi_k(W_k(t),y_k(0))$.\n\nAs explained above, closed-form expressions for the flows $\\varphi_k$ and $\\widehat\\varphi_k$ are unknown in general but can be obtained for a wide class of stochastic Lie--Poisson systems\n\\begin{equation}\\label{slp}\n\\left\\lbrace\n\\begin{aligned}\n&\\,\\mathrm{d} y(t)=B(y(t))\\nabla H(y(t))\\,\\,\\mathrm{d} t+\nB(y(t))\\sum_{k=1}^m\\nabla \\widehat H_k(y(t))\\circ\\,\\,\\mathrm{d} W_k(t),\\\\\n&B_{ij}(y)=\\sum_{k=1}^db_{ji}^ky_k\\quad\\text{for}\\quad i,j=1,\\ldots,d, \n\\end{aligned}\n\\right.\n\\end{equation}\nwhere the structure matrix $B(y)$ depends linearly on $y$. For the examples of stochastic Lie--Poisson systems introduced in Section~\\ref{sec:examples}, \nwe design below explicit splitting schemes which can be easily implemented. In the sequel, we analyse the geometric and convergence properties of splitting integrators in an abstract framework, where it is not assumed that the flows $\\varphi_k$ and $\\hat{\\varphi}_k$ can be computed exactly. In particular, the assumption that the structure matrix $B$ depends linearly on $y$ is not required in the analysis. 
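For a stochastic rigid-body-type system, the composition of exact sub-flows described above can be sketched in Python as follows (illustrative choices throughout: convention $B(y)v=y\times v$, inertia values $I$, and a single linear noise Hamiltonian $\widehat H_1(y)=c\cdot y$). Splitting $H=\sum_k y_k^2/(2I_k)$ makes every deterministic sub-flow an exact rotation about a coordinate axis, and the noise sub-flow is a rotation about $c$ evaluated at the Wiener increment; every sub-map is orthogonal, so the Casimir $\|y\|^2$ is preserved to rounding accuracy:

```python
import numpy as np

def rot(axis, angle):
    # Rodrigues formula: rotation matrix about a unit axis
    a = axis / np.linalg.norm(axis)
    K = np.array([[0.0, -a[2], a[1]],
                  [a[2], 0.0, -a[0]],
                  [-a[1], a[0], 0.0]])
    return np.eye(3) + np.sin(angle) * K + (1.0 - np.cos(angle)) * (K @ K)

I = np.array([1.0, 2.0, 3.0])    # assumed moments of inertia
c = np.array([0.0, 0.0, 0.5])    # assumed noise Hamiltonian Hhat(y) = c . y

def Phi(y, h, dW):
    # exact noise sub-flow: dy = y x c o dW  ->  rotation about c by -|c| dW
    y = rot(c, -np.linalg.norm(c) * dW) @ y
    # exact deterministic sub-flows of H_k = y_k^2 / (2 I_k):
    # rotation about e_k by angle -h y_k / I_k (y_k is constant along it)
    for k in range(3):
        e = np.zeros(3)
        e[k] = 1.0
        y = rot(e, -h * y[k] / I[k]) @ y
    return y

rng = np.random.default_rng(7)
h, N = 1e-2, 2000
y = np.array([0.8, 0.4, -0.3])
C0 = y @ y
for _ in range(N):
    y = Phi(y, h, rng.normal(0.0, np.sqrt(h)))
# every sub-map is a rotation: the Casimir ||y||^2 is preserved
assert abs(y @ y - C0) < 1e-10
```

This is only a sketch of the general construction under the stated assumptions; the schemes for the Maxwell--Bloch, rigid body and sine--Euler examples are worked out in detail later in the text.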
Note that expressions of the flows may also be known for some stochastic Poisson systems which are not Lie--Poisson problems, in which case the abstract analysis would also be applicable.\n\n\n\n\n\nWe are now in a position to define splitting integrators for the stochastic Poisson system~\\eqref{prob}, \nwhich will be exemplified in the case of stochastic Lie--Poisson systems~\\eqref{slp}. This general splitting integrator is given by\n\\begin{align}\\label{slpI}\n\\Phi_h(\\cdot)&=\\Phi_h(\\cdot,\\Delta W_1,\\ldots,\\Delta W_m)=\\exp(hY_{H_p})\\circ\\exp(hY_{H_{p-1}})\\circ\\ldots\\circ\\exp(hY_{H_1}) \\nonumber\\\\\n&\\circ\\exp(\\Delta W_mY_{\\widehat H_m})\\circ\\exp(\\Delta W_{m-1}Y_{\\widehat H_{m-1}})\n\\circ\\ldots\\circ\\exp(\\Delta W_1Y_{\\widehat H_1}).\n\\end{align}\n\nIt is straightforward to check the following fundamental result.\n\\begin{proposition}\\label{propo:sPi}\nThe splitting integrator~\\eqref{slpI} is a stochastic Poisson integrator, in the sense of Definition~\\ref{defPI}, \nfor the stochastic Poisson system~\\eqref{prob}.\n\\end{proposition}\n\\begin{proof}\nObserve that for any $h>0$ and any real numbers $\\Delta w_1,\\ldots,\\Delta w_m$, the mapping $\\Phi_h(\\cdot,\\Delta w_1,\\ldots,\\Delta w_m)$ is a composition of flow maps $\\varphi_k(h,\\cdot)$, $k=1,\\ldots,p$ and $\\widehat\\varphi_k(\\Delta w_k,\\cdot)$, $k=1,\\ldots,m$.\nOwing to Theorem~\\ref{th:flowP}, all of these flow maps are Poisson maps (since they are flow maps of either deterministic or stochastic Poisson systems).\n\nIn addition, if $C$ is a Casimir function of the stochastic Poisson system~\\eqref{prob}, then $C$ is preserved by each of the flow maps $\\varphi_k(h,\\cdot)$, $k=1,\\ldots,p$ and $\\widehat\\varphi_k(\\Delta w_k,\\cdot)$, $k=1,\\ldots,m$. 
Indeed, recall that the definition of a Casimir only depends on the structure matrix $B$, and not on the Hamiltonian functions, and all the associated vector fields are of the type $Y_{H_k}=B\\nabla H_k$ and $Y_{\\widehat H_k}=B\\nabla\\widehat H_k$: the associated flow maps thus preserve $C$.\nAs a consequence, the general splitting integrator $\\Phi_h(\\cdot,\\Delta w_1,\\ldots,\\Delta w_m)$ also preserves the Casimir functions $C$ of the stochastic Poisson system~\\eqref{prob}.\nThis concludes the proof that the splitting scheme~\\eqref{slpI} is a stochastic Poisson integrator.\n\\end{proof}\n\n\n\n\nBefore proceeding to the convergence analysis for the splitting integrators~\\eqref{slpI}, \nit is worth exploiting the fact that they are stochastic Poisson integrators \nto state that the numerical solution remains bounded if the considered stochastic Poisson system~\\eqref{prob} admits a Casimir function $C$ with compact level sets. We refer to Proposition~\\ref{propo:global} for the statement of a similar result for the solution of the stochastic Poisson system~\\eqref{prob}, in particular the assumption on compact level sets.\n\n\\begin{proposition}\\label{propo:numerik}\nAssume that the stochastic Poisson system~\\eqref{prob} admits a Casimir function $C$ which has compact level sets. Consider the stochastic Poisson integrator $y^{[n+1]}=\\Phi_h(y^{[n]})$ \ngiven by \\eqref{slpI}. 
Then, for any initial condition $y^{[0]}=y_0\\in\\mathbb R^d$, \nalmost surely one has the following bound for the numerical solution\n\\[\n\\underset{h>0}\\sup~\\underset{n\\ge 0}\\sup~\\|y^{[n]}\\|\\le R(y^{[0]})=\\max_{y\\in\\mathbb R^d, C(y)=C(y^{[0]})}\\|y\\|.\n\\]\n\\end{proposition}\n\\begin{proof}\nThe splitting scheme~\\eqref{slpI} is a stochastic Poisson integrator (owing to Proposition~\\ref{propo:sPi}), thus it preserves the Casimir function $C$: therefore for all $n\\ge 1$,\n\\[\nC(y^{[n]})=C(y^{[n-1]})=\\ldots=C(y^{[0]}).\n\\]\nNote that $R(y^{[0]})<\\infty$, since by assumption the Casimir function $C$ has compact level sets. Therefore one obtains\n\\[\n\\|y^{[n]}\\|\\le R(y^{[0]})\n\\]\nfor all $n\\ge 0$ by the definition of $R(y^{[0]})$. This concludes the proof.\n\\end{proof}\n\n\n\\begin{remark}\\label{rem-xchange}\nThe stochastic Poisson integrator~\\eqref{slpI} employs a Lie--Trotter splitting strategy. \nChanging the order of integration of the deterministic and stochastic parts yields the following alternative to~\\eqref{slpI}\n\\begin{align*}\n\\Phi_h(\\cdot)&=\\Phi_h(\\cdot,\\Delta W_1,\\ldots,\\Delta W_m)\\\\\n=&\\exp(\\Delta W_mY_{\\widehat H_m})\\circ\\exp(\\Delta W_{m-1}Y_{\\widehat H_{m-1}})\\circ\\ldots\\circ\\exp(\\Delta W_1Y_{\\widehat H_1})\\\\\n&\\circ\\exp(hY_{H_p})\\circ\\exp(hY_{H_{p-1}})\\circ\\ldots\\circ\\exp(hY_{H_1}).\n\\end{align*}\nThis alternative scheme is also a stochastic Poisson integrator, which satisfies Propositions~\\ref{propo:sPi} and~\\ref{propo:numerik}. The theoretical analysis of that scheme and associated numerical experiments are not reported in the present article.\n\\end{remark}\n\n\\begin{remark}\\label{rem:order2}\nA numerical method of weak order $2$ can be designed using the strategy developed in~\\cite{MR3570281}. 
The integrator is a combination of three mappings and depends on an additional random variable $\\gamma_n$, uniformly distributed in $\\{-1,1\\}$:\n\\begin{equation}\\label{eq:order2}\ny^{[n]}=\\Phi_{h,\\gamma_n}(y^{[n-1]})=\\Phi_{h\/2}^{det,S}\\circ \\Phi_{h,\\gamma_n}^{sto}(\\cdot,\\Delta_n W_1,\\ldots,\\Delta_n W_m)\\circ \\Phi_{h\/2}^{det,S}(y^{[n-1]}),\n\\end{equation}\nwhere\n\\[\n\\Phi_{h\/2}^{det,S}=\\exp(\\frac{h}{4}Y_{H_{1}})\\circ\\ldots\\circ\\exp(\\frac{h}{4}Y_{H_{p-1}})\\circ\\exp(\\frac{h}{2}Y_{H_p})\\circ\\exp(\\frac{h}{4}Y_{H_{p-1}})\\circ\\ldots\\circ\\exp(\\frac{h}{4}Y_{H_1})\n\\]\nis obtained using a Strang splitting integrator with time step size $h\/2$ for the deterministic part of the equation, and\n\\[\n\\Phi_{h,\\gamma_n}^{sto}(\\cdot,\\Delta_n W_1,\\ldots,\\Delta_n W_m)=\\begin{cases}\n\\exp(\\Delta_n W_mY_{\\widehat H_m})\\circ\\ldots\\circ\\exp(\\Delta_n W_1Y_{\\widehat H_1}),\\quad \\gamma_n=1\\\\\n\\exp(\\Delta_n W_1Y_{\\widehat H_1})\\circ\\ldots\\circ\\exp(\\Delta_n W_mY_{\\widehat H_{m}}),\\quad \\gamma_n=-1\n\\end{cases}\n\\]\nis obtained using a Lie--Trotter splitting integrator $\\Phi_{h,\\gamma_n}^{sto}$ with time step size $h$ applied to the stochastic part of the equation, where the order of integration depends on $\\gamma_n$.\n\nIt is straightforward to check that the numerical scheme~\\eqref{eq:order2} is a stochastic Poisson integrator, using the same arguments as in the proof of Proposition~\\ref{propo:sPi}. Numerical experiments which illustrate the behaviour of this scheme and weak convergence with order $2$ will be reported below in Section~\\ref{sect-numexp}. However, we do not give details concerning the theoretical analysis of the scheme~\\eqref{eq:order2}. \n\nWe also refer to~\\cite{MR3927434,MR2409419} for other possible constructions of higher order splitting methods for SDEs. 
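For concreteness, the composition in~\eqref{eq:order2} can be sketched in a few lines of Python, assuming the exact flows are available as callables; the function names below are ours and are not part of the notation above.

```python
# Sketch of one step of the weak order-2 scheme (eq:order2), assuming the
# exact flows are available as callables:
#   det_flows[k](t, y)  plays the role of exp(t Y_{H_{k+1}}),
#   sto_flows[k](w, y)  plays the role of exp(w Y_{hat H_{k+1}}).
# The function names are ours, not part of the paper's notation.

def strang_det(s, y, det_flows):
    """Strang splitting with time step s for the deterministic part."""
    p = len(det_flows)
    for k in range(p - 1):              # exp(s/2 Y_{H_1}), ..., exp(s/2 Y_{H_{p-1}})
        y = det_flows[k](s / 2, y)
    y = det_flows[p - 1](s, y)          # exp(s Y_{H_p})
    for k in reversed(range(p - 1)):    # ... and back down to exp(s/2 Y_{H_1})
        y = det_flows[k](s / 2, y)
    return y

def order2_step(h, y, det_flows, sto_flows, dws, gamma):
    """One step of (eq:order2); gamma is drawn uniformly from {-1, +1}."""
    y = strang_det(h / 2, y, det_flows)
    ks = range(len(sto_flows)) if gamma == 1 else reversed(range(len(sto_flows)))
    for k in ks:                        # order of the noise flows depends on gamma
        y = sto_flows[k](dws[k], y)
    return strang_det(h / 2, y, det_flows)

# Toy check with commuting (scalar translation) flows:
det_flows = [lambda t, y: y + t, lambda t, y: y + 10 * t]
sto_flows = [lambda w, y: y + w]
out = order2_step(0.1, 0.0, det_flows, sto_flows, [0.3], gamma=1)
assert abs(out - (11 * 0.1 + 0.3)) < 1e-12
```

Note that calling `strang_det` with time step $h/2$ reproduces the half-steps $h/4$ and the central step $h/2$ of $\Phi_{h/2}^{det,S}$.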
Finally, another possible strategy to design higher order integrators would be to use modified equations, like in~\\cite{MR2970274}.\n\\end{remark}\n\n\n\n\n\n\\subsection{Convergence analysis}\\label{sec:cv}\nThe objective of this section is to prove a general strong and weak convergence result for stochastic Poisson integrators~\\eqref{slpI} defined by the splitting strategy. Note that we assume that the stochastic Poisson system~\\eqref{prob} \nadmits a Casimir function with compact level sets: as explained above, this condition ensures global well-posedness for the continuous problem, and provides almost sure bounds for the exact and numerical solutions (Propositions~\\ref{propo:global} and~\\ref{propo:numerik}). As a consequence, the general convergence result can be applied to get strong and weak convergence rates for the proposed explicit stochastic Poisson integrator~\\eqref{slpI}, when applied to the stochastic rigid body system (Example~\\ref{expl-SRB}) and to the stochastic sine--Euler system (Example~\\ref{expl-SE}), see Theorems~\\ref{thm-srb} and~\\ref{thm-se} below respectively. Note that these two SDEs do not have globally Lipschitz continuous coefficients, so for those examples standard explicit schemes such as the Euler--Maruyama method may fail to converge strongly. 
The fact that the proposed scheme is a stochastic Poisson integrator is essential to perform the convergence analysis.\nHowever, the general convergence result below cannot be applied to the stochastic Maxwell--Bloch system -- \nthe generalisation of the result to that example is not treated in the present work.\n\n\n\n\n\\begin{theorem}\\label{thm-general}\nAssume that the stochastic Poisson system~\\eqref{prob} admits a Casimir function with compact level sets.\n\n\n\n\\begin{itemize}\n\\item[Strong convergence] Assume that $B$ is of class $\\mathcal{C}^2$, that the mappings $H_1,\\ldots,H_p$ are of class $\\mathcal{C}^2$, and that the mappings $\\widehat{H}_1,\\ldots,\\widehat{H}_m$ are of class $\\mathcal{C}^3$. \nThen the stochastic Poisson integrator~\\eqref{slpI} has strong order of convergence equal to $1\/2$: for all $T\\in(0,\\infty)$ and all $y_0\\in\\mathbb{R}^d$, there exists a real number $c(T,y_0)\\in(0,\\infty)$ such that\n\\[\n\\underset{0\\le n\\le N}\\sup~\\left(\\mathbb E\\left[ \\norm{ y\\left(nh\\right)-y^{[n]} }^2 \\right] \\right)^{1\/2}\\le c(T,y_0)h^{\\frac12},\n\\]\nwith time step size $h=T\/N$, and $y^{[0]}=y_0=y(0)$. \n\nIf $m=1$, then the strong order of convergence is equal to $1$.\n\n\n\\item[Weak convergence] Assume that $B$ is of class $\\mathcal{C}^5$, that the mappings $H_1,\\ldots,H_p$ are of class $\\mathcal{C}^5$, and that the mappings $\\widehat{H}_1,\\ldots,\\widehat{H}_m$ are of class $\\mathcal{C}^6$. 
Then the stochastic Poisson integrator~\\eqref{slpI} has weak order of convergence equal to $1$: for all $T\\in(0,\\infty)$ and all $y_0\\in\\mathbb{R}^d$, and any test function $\\phi\\colon\\mathbb{R}^d\\to\\mathbb{R}$ of class $\\mathcal{C}^4$ with bounded derivatives, there exists a real number $c(T,y_0,\\phi)\\in(0,\\infty)$ such that\n\\[\n\\underset{0\\le n\\le N}\\sup~\\left|\\mathbb E\\left[\\phi\\left(y\\left(nh\\right)\\right)\\right]-\\mathbb E\\left[\\phi\\left(y^{[n]}\\right)\\right]\\right|\\leq c(T,y_0,\\phi)h.\n\\]\n\\end{itemize}\n\\end{theorem}\n\nThe convergence theorem stated above concerning the strong and weak rates of convergence of the stochastic Poisson integrator~\\eqref{slpI} applied to the stochastic Poisson system~\\eqref{prob} is an immediate consequence of the following auxiliary result, which is stated for a general SDE of the type\n\\begin{equation}\\label{auxSDE}\n\\,\\mathrm{d} z(t)=\\sum_{k=1}^{p}f_k(z(t))\\,\\,\\mathrm{d} t+\\sum_{k=1}^m\\widehat f_k(z(t))\\circ\\,\\,\\mathrm{d} W_k(t),\n\\end{equation}\nwith functions $f_k$ and $\\widehat{f}_k$ which are globally Lipschitz continuous.\n\n\\begin{lemma}\\label{lemm-aux}\nConsider the auxiliary splitting scheme\n\\begin{equation}\\label{auxscheme}\nz^{[n]}=\\varphi_p(h,\\cdot)\\circ\\ldots\\circ \\varphi_1(h,\\cdot)\\circ\\widehat\\varphi_m(\\Delta W_m^n,\\cdot)\\circ \\ldots\\circ \\widehat\\varphi_1(\\Delta W_1^n,\\cdot)(z^{[n-1]}),\n\\end{equation}\nwith $z^{[0]}=z_0=z(0)$, associated with the auxiliary SDE~\\eqref{auxSDE}, where $\\varphi_k$ is the flow associated with the ODE $\\dot{z}_k=f_k(z_k)$, $k=1,\\ldots,p$, and $\\widehat \\varphi_k$ is the flow associated with the ODE $\\dot{z}_k=\\widehat f_k(z_k)$.\n\n\n\\begin{itemize}\n\\item[Strong convergence] Assume that the mappings $f_1,\\ldots,f_p$ are of class $\\mathcal{C}^1$ with bounded derivatives, and that the mappings $\\widehat{f}_1,\\ldots,\\widehat{f}_m$ are bounded and of class $\\mathcal{C}^2$ with bounded 
first and second order derivatives. Then the auxiliary scheme~\\eqref{auxscheme} has strong order of convergence equal to $1\/2$: for all $T\\in(0,\\infty)$ and all $z_0\\in\\mathbb{R}^d$, there exists a real number $c(T,z_0)\\in(0,\\infty)$ such that\n\\begin{equation}\\label{eq:strongaux}\n\\underset{0\\le n\\le N}\\sup~\\left(\\mathbb E\\left[ \\norm{ z\\left(nh\\right)-z^{[n]} }^2 \\right] \\right)^{1\/2}\\le c(T,z_0)h^{\\frac12}.\n\\end{equation}\n\nIn the commutative noise case, {\\it i.e.} if $\\widehat{f}_k'(z)\\widehat{f}_\\ell(z)=\\widehat{f}_\\ell'(z)\\widehat{f}_k(z)$ for all $k,\\ell=1,\\ldots,m$, the strong order of convergence is equal to $1$.\n\n\\item[Weak convergence] Assume that the mappings $f_1,\\ldots,f_p$ are of class $\\mathcal{C}^4$ with bounded derivatives, and that the mappings $\\widehat{f}_1,\\ldots,\\widehat{f}_m$ are bounded and of class $\\mathcal{C}^5$ with bounded first and second order derivatives. Then the auxiliary scheme~\\eqref{auxscheme} has weak order of convergence equal to $1$: for all $T\\in(0,\\infty)$ and all $z_0\\in\\mathbb{R}^d$, and any test function $\\phi:\\mathbb{R}^d\\to\\mathbb{R}$ of class $\\mathcal{C}^4$, there exists a real number $c(T,z_0,\\phi)\\in(0,\\infty)$ such that\n\\begin{equation}\\label{eq:weakaux}\n\\underset{0\\le n\\le N}\\sup~\\left|\\mathbb E\\left[\\phi\\left(z\\left(nh\\right)\\right)\\right]-\\mathbb E\\left[\\phi\\left(z^{[n]}\\right)\\right]\\right|\\leq c(T,z_0,\\phi)h.\n\\end{equation}\n\\end{itemize}\n\\end{lemma}\nThe proof of Lemma~\\ref{lemm-aux} is postponed to Appendix~\\ref{app-auxlem}. Let us now check how Theorem~\\ref{thm-general} is a straightforward corollary of Lemma~\\ref{lemm-aux}. Note that if $m=1$, the commutative noise case condition is satisfied.\n\n\n\\begin{proof}[Proof of Theorem~\\ref{thm-general}]\nOwing to Propositions~\\ref{propo:global} and~\\ref{propo:numerik}, the exact and numerical solutions of the SDE~\\eqref{prob}, resp. 
scheme~\\eqref{slpI}, \nsatisfy the almost sure bounds\n\\[\n\\underset{t\\in[0,T]}\\sup~\\norm{y(t)}\\le R(y_0),\\quad \\underset{N\\ge 1}\\sup~\\underset{0\\le n\\le N}\\sup~\\norm{y^{[n]}}\\le R(y_0),\n\\]\nwhere $R(y_0)=\\max_{y\\in\\mathbb R^d, C(y)=C(y_0)}\\|y\\|$, and $R(y_0)<\\infty$ since $C$ has compact level sets by assumption.\n\nUsing the same construction as in the proof of Proposition~\\ref{propo:global}, one can define compactly supported functions $f_k$ and $\\widehat{f}_k$, such that $f_k(y)=B(y)\\nabla H_k(y)$ and $\\widehat{f}_k(y)=B(y)\\nabla \\widehat H_k(y)$ for all $y\\in \\mathbb{R}^d$ such that $\\|y\\|\\le R(y_0)$. \nIn addition, $f_k$ is at least of class $\\mathcal{C}^1$ and $\\widehat{f}_k$ is at least of class $\\mathcal{C}^2$.\n\nNote that with this choice, $y(t)=z(t)$ and $y^{[n]}=z^{[n]}$ for all $t\\in[0,T]$ and all $n\\in\\{0,\\ldots,N\\}$, where $\\bigl(z(t)\\bigr)_{t\\ge 0}$ is the solution of the auxiliary SDE~\\eqref{auxSDE} and $\\bigl(z^{[n]}\\bigr)_{n\\ge 0}$ is obtained by the auxiliary scheme~\\eqref{auxscheme}. It remains to apply Lemma~\\ref{lemm-aux} to conclude. Note also that it is not necessary to assume that the functions $\\phi$, $B$, $H_1,\\ldots,H_p$, $\\widehat H_1,\\ldots,\\widehat H_m$ and their derivatives are bounded. 
This is due to the boundedness of the exact and numerical solutions provided by the preservation of the Casimir function $C$ and the compact level sets assumption.\n\\end{proof}\n\n\\begin{remark}\\label{rem-consistent}\nIf one considers the following variant of the stochastic Poisson system~\\eqref{prob}\n\\[\n\\,\\mathrm{d} y(t)=B(y(t))\\nabla H(y(t))\\,\\,\\mathrm{d} t+\\sum_{k=1}^mB(y(t))\\nabla \\widehat H_k(y(t))\\circ\\,\\,\\mathrm{d} W(t)\n\\]\ndriven by a single Wiener process $W$ (that is $W_1=\\ldots=W_m=W$), the associated variant of the proposed stochastic Poisson \nintegrator~\\eqref{slpI} reads\n\\begin{align*}\n\\Phi_h&=\\exp(hY_{H_p})\\circ\\exp(hY_{H_{p-1}})\\circ\\ldots\\circ\\exp(hY_{H_1}) \\nonumber\\\\\n&\\circ\\exp(\\Delta WY_{\\widehat H_m})\\circ\\exp(\\Delta WY_{\\widehat H_{m-1}})\n\\circ\\ldots\\circ\\exp(\\Delta WY_{\\widehat H_1}).\n\\end{align*}\nThis scheme is not consistent when $m\\ge 2$. In the proof of the convergence result Theorem~\\ref{thm-general}, more precisely in the proof of Lemma~\\ref{lemm-aux}, the independence of the Wiener processes $W_1,\\ldots,W_m$ plays a crucial role.\n\\end{remark}\n\n\n\n\n\n\n\n\n\n\n\\subsection{Asymptotic preserving schemes in the diffusion approximation regime}\\label{APsplit}\n\n\nThe objective of this section is to use the proposed splitting stochastic Poisson integrators~\\eqref{slpI} in order to define effective numerical schemes for the discretisation of the multiscale system~\\eqref{prob_eps} described in Section~\\ref{sec:multi}. The challenge is to obtain a good behaviour of the numerical scheme when $\\epsilon\\to 0$. On the one hand, one needs to avoid time step size restrictions of the type $h={\\rm O}(\\epsilon)$ or $h={\\rm O}(\\epsilon^2)$, which would be prohibitive when $\\epsilon$ is small. 
On the other hand, it would be desirable to have a convergence (in distribution) of the type $y^{\\epsilon,[n]}\\underset{\\epsilon\\to 0}\\to y^{[n]}$, for all fixed $h>0$ and $n\\ge 1$, to reproduce the diffusion approximation result Proposition~\\ref{propo:multi} at the discrete time level. Indeed, if the two requirements above are satisfied, the integrator can be used to approximate both~\\eqref{prob} and~\\eqref{prob_eps}, without the need to adapt the time step size $h$ when $\\epsilon$ vanishes.\n\nNumerical methods which satisfy the two requirements above are known as \\emph{asymptotic preserving} schemes. We refer to the recent work~\\cite{BRR} where asymptotic preserving schemes were introduced for a class of stochastic differential equations of the type~\\eqref{prob_eps}. Note that a standard Euler--Maruyama scheme does not satisfy the asymptotic preserving property. Recall that for this notion of asymptotic preserving schemes, the convergence is understood in the sense of convergence in distribution of random variables. 
Using the splitting strategy allows us to design other examples of asymptotic preserving schemes for~\\eqref{prob_eps}, such that the corresponding limit scheme (obtained when $\\epsilon\\to 0$ with fixed time step size $h>0$) is the splitting stochastic Poisson integrator~\\eqref{slpI}.\n\n\nWe propose the following integrator for the multiscale system~\\eqref{prob_eps}: for any $\\epsilon\\in(0,1)$ and any time step size $h>0$, for all $n\\ge 1$, set\n\\begin{align}\\label{APscheme}\ny^{\\epsilon,[n]}&=\\exp(hY_{H_p})\\circ\\ldots\\circ\\exp(hY_{H_1}) \\nonumber\\\\\n&\\circ\\exp\\left(\\frac{h\\xi_{m}^{\\epsilon,[n]}}{\\epsilon}Y_{\\widehat H_m}\\right)\\circ\\ldots\n\\circ\\exp\\left(\\frac{h\\xi_{1}^{\\epsilon,[n]}}{\\epsilon}Y_{\\widehat H_1}\\right)(y^{\\epsilon,[n-1]}),\n\\end{align}\nwhere, for each $k=1,\\ldots,m$, the Ornstein--Uhlenbeck process $\\xi_k^\\epsilon$ is discretised using the linear implicit Euler scheme\n\\[\n\\xi_k^{\\epsilon,[n]}=\\xi_{k}^{\\epsilon,[n-1]}-\\frac{h}{\\epsilon^2}\\xi_{k}^{\\epsilon,[n]}+\\frac{\\Delta_n W_k}{\\epsilon}=\\frac{1}{1+\\frac{h}{\\epsilon^2}}\\Bigl(\\xi_k^{\\epsilon,[n-1]}+\\frac{\\Delta_n W_k}{\\epsilon}\\Bigr).\n\\]\nNote that $C(y^{\\epsilon,[n]})=C(y^{\\epsilon,[0]})$ for all $n\\ge 0$, if $C$ is a Casimir function of the stochastic Poisson system~\\eqref{prob}. 
If $C$ has compact level sets, this yields the following variant of the bound of Proposition~\\ref{propo:numerik},\n\\[\n\\underset{\\epsilon\\in(0,1)}\\sup~\\underset{h>0}\\sup~\\underset{n\\ge 0}\\sup~\\|y^{\\epsilon,[n]}\\|\\le R(y^{\\epsilon,[0]})=\\max_{y\\in\\mathbb R^d, C(y)=C(y^{\\epsilon,[0]})}\\|y\\|,\n\\]\nwhich is uniform over $\\epsilon$.\n\nObserve that for all $n\\ge 1$ and $h>0$, one has\n\\[\n\\frac{h\\xi_{k}^{\\epsilon,[n]}}{\\epsilon}=\\Delta_nW_k+\\epsilon\\bigl(\\xi_k^{\\epsilon,[n-1]}-\\xi_k^{\\epsilon,[n]}\\bigr)\\underset{\\epsilon\\to 0}\\to \\Delta_nW_k.\n\\]\n\nBy a recursion argument, it is then straightforward to check that\n\\[\ny^{\\epsilon,[n]}\\underset{\\epsilon\\to 0}\\to y^{[n]},\n\\]\nfor all $n\\ge 0$ and for all fixed $h>0$, where $y^{[n]}$ is given by the splitting scheme~\\eqref{slpI}. \nAs a consequence, the scheme~\\eqref{APscheme} is an asymptotic preserving scheme, in the sense of~\\cite{BRR}: the following diagram commutes\n\\[\n\\begin{CD}\ny^{\\epsilon,[N]} @>{N\\to\\infty}>> y^\\epsilon(T) \\\\\n@VV{\\epsilon\\to 0}V @VV{\\epsilon\\to 0}V\\\\\ny^{[N]} @>{h\\to 0}>> y(T)\n\\end{CD}\n\\]\nwhen the time step size is given by $h=T\/N$. In other words:\n\\begin{itemize}\n\\item for each fixed $\\epsilon>0$, the scheme~\\eqref{APscheme} is consistent with~\\eqref{prob_eps} when $h\\to 0$,\n\\item for each fixed $h>0$, the proposed scheme~\\eqref{APscheme} converges to a limiting scheme when $\\epsilon\\to 0$, which is given here by \nthe abstract splitting scheme~\\eqref{slpI},\n\\item the limiting scheme~\\eqref{slpI} is consistent when $h\\to 0$ with~\\eqref{prob}, which is the limit when $\\epsilon\\to 0$ of~\\eqref{prob_eps}.\n\\end{itemize}\nWe refer to the recent work~\\cite{BRR} for a general analysis of asymptotic preserving schemes for stochastic differential equations. 
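The convergence $h\xi_k^{\epsilon,[n]}/\epsilon\to\Delta_nW_k$ exploited above is easy to observe numerically. The following minimal sketch (the function name is ours) implements the linear implicit Euler step for the Ornstein--Uhlenbeck component and evaluates the effective noise increment for decreasing $\epsilon$ at fixed $h$.

```python
# Sketch: linear implicit Euler step for the Ornstein-Uhlenbeck component of
# (APscheme), and the effective increment h*xi/eps fed to the splitting maps.
# The function name is ours.  At fixed h, the increment tends to dW as eps -> 0.

def ou_step(xi, h, eps, dW):
    """xi_new solves  xi_new = xi - (h/eps^2) xi_new + dW/eps  (implicit Euler)."""
    return (xi + dW / eps) / (1.0 + h / eps**2)

h, dW, xi = 0.1, 0.25, 0.7
for eps in (1e-1, 1e-2, 1e-4, 1e-8):
    incr = h * ou_step(xi, h, eps, dW) / eps
    print(f"eps = {eps:.0e}:  h*xi/eps = {incr:.10f}")   # approaches dW = 0.25
```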
As explained in~\\cite{BRR}, the construction of asymptotic preserving schemes for SDEs, in particular to obtain equations interpreted in the Stratonovich sense, may be subtle. Here we do not employ a predictor-corrector strategy as in~\\cite{BRR} (which is used to get the Stratonovich interpretation instead of the It\\^o one), \nsince we directly use exact flows of the appropriate subsystems in the splitting procedure: in the present paper, the Stratonovich interpretation is obtained in a natural way.\n\nThe property of being asymptotic preserving is a qualitative property of a numerical scheme. Let us now briefly discuss the behaviour of weak error estimates of the asymptotic preserving scheme~\\eqref{APscheme} when $\\epsilon$ is small. For each fixed $\\epsilon>0$, it is expected that the proposed asymptotic preserving scheme~\\eqref{APscheme} has a weak order of convergence equal to $1$ in general: for test functions $\\phi\\colon\\mathbb{R}^d\\to\\mathbb{R}$ of class $\\mathcal{C}^4$, one has\n\\[\n\\left|\\mathbb E[\\varphi(y^{\\epsilon,[N]})]-\\mathbb E[\\varphi(y^\\epsilon(T))]\\right|\\le c_\\epsilon(T,\\varphi)h,\n\\]\nwhere $h=\\frac{T}{N}$ and the real number $c_\\epsilon(T,\\varphi)$ may depend on $\\epsilon$ and diverge when $\\epsilon\\to 0$. \nIn order to have a computational cost independent of the parameter $\\epsilon$, it would be desirable to establish that the proposed scheme is \\emph{uniformly accurate}: one would need to prove error estimates of the type\n\\[\n\\underset{\\epsilon\\in(0,\\epsilon_0)}\\sup~\\big|\\mathbb E[\\varphi(y^{\\epsilon,[N]})]-\\mathbb E[\\varphi(y^\\epsilon(T))]\\big|\\le c(T,\\varphi)h^\\alpha,\n\\]\nwhich are uniform with respect to $\\epsilon\\in(0,\\epsilon_0)$ (with arbitrary $\\epsilon_0>0$), in other words $c(T,\\varphi)$ is independent of $\\epsilon$. \nObserve that a reduction of the order of convergence, namely $\\alpha<1$, may happen. 
\nProving the uniform accuracy property of the scheme~\\eqref{APscheme} is beyond the scope of this work. \nHowever, in the numerical experiments reported below, we investigate whether such uniform weak error estimates hold for the considered problems. \n\n\n\\begin{remark}\nIt is possible to define a variant of the asymptotic preserving scheme~\\eqref{APscheme}, using a midpoint approximation for the Ornstein--Uhlenbeck components:\n\\[\n\\xi_k^{\\epsilon,[n]}=\\xi_{k}^{\\epsilon,[n-1]}-\\frac{h}{2\\epsilon^2}\\left(\\xi_k^{\\epsilon,[n-1]}+\\xi_{k}^{\\epsilon,[n]}\\right)+\\frac{\\Delta_n W_k}{\\epsilon},\n\\]\nin which case the definition of $y^{\\epsilon,[n]}$ needs to be modified as follows:\n\\begin{align*}\ny^{\\epsilon,[n]}&=\\exp(hY_{H_p})\\circ\\exp(hY_{H_{p-1}})\\circ\\ldots\\circ\\exp(hY_{H_1}) \\nonumber\\\\\n&\\circ\\exp\\left(\\frac{h(\\xi_m^{\\epsilon,[n-1]}+\\xi_{m}^{\\epsilon,[n]})}{2\\epsilon}Y_{\\widehat H_m}\\right)\\circ\\ldots\\circ\\exp\\left(\\frac{h(\\xi_1^{\\epsilon,[n-1]}+\\xi_{1}^{\\epsilon,[n]})}{2\\epsilon}Y_{\\widehat H_1}\\right)(y^{\\epsilon,[n-1]}).\n\\end{align*}\nThat scheme is also asymptotic preserving.\n\\end{remark}\n\n\n\\section{Numerical experiments}\\label{sect-numexp}\n\nIn this section, we illustrate the behaviour of the stochastic Poisson integrators which have been proposed and analysed in Section~\\ref{sec:integrators}. \nWe choose to present numerical experiments for the three examples of stochastic Lie--Poisson systems introduced in Section~\\ref{sec:poisson}. \nOn the one hand, we illustrate the qualitative properties of the proposed splitting stochastic Poisson integrators, compared with standard methods, \nby considering the temporal evolution of Casimir functions. On the other hand, we investigate and state strong and weak orders of convergence (which are consequences of Theorem~\\ref{thm-general}), \nand we illustrate the quantitative error estimates obtained above. 
In addition, we illustrate the asymptotic preserving property for the multiscale versions of the considered systems. Note that, in general, the theoretical convergence results cannot be \napplied to the stochastic Maxwell--Bloch system (Example~\\ref{expl-MB}), \nsince no Casimir function with compact level sets is known for that example. \nHowever, the theoretical results can be applied to the stochastic rigid body system (Example~\\ref{expl-SRB}) and to the stochastic sine--Euler system (Example~\\ref{expl-SE}). \n\n\\subsection{Explicit stochastic Poisson integrators for stochastic Maxwell--Bloch equations}\\label{ssec:PMB} \n\nThis subsection presents explicit stochastic Poisson integrators for the stochastic Maxwell--Bloch system~\\eqref{smb} (Example~\\ref{expl-MB}). We first give a detailed construction of the splitting scheme, which yields a stochastic Poisson integrator. We then illustrate its qualitative properties (preservation of the Casimir function) and strong and weak error estimates of the proposed scheme by numerical experiments. Finally, we illustrate the asymptotic preserving property (Section~\\ref{APsplit}) for a multiscale version of the system.\n\n\n\\subsubsection{Presentation of the splitting scheme for the stochastic Maxwell--Bloch system}\n\nRecall that the stochastic Maxwell--Bloch system~\\eqref{smb} introduced in Example~\\ref{expl-MB} is of the type\n\\begin{equation*}\n\\,\\mathrm{d} y=B(y)\\left(\\nabla H(y)\\,\\,\\mathrm{d} t+\\sigma_1\\nabla \\widehat H_1(y)\\circ\\,\\,\\mathrm{d} W_1(t)+\\sigma_3\\nabla \\widehat H_3(y)\\circ\\,\\,\\mathrm{d} W_3(t) \\right).\n\\end{equation*}\n\nTo apply the strategy described in Section~\\ref{ssec:LP} and construct explicit stochastic Poisson integrators, we follow the approach from~\\cite{MR1702129} for the deterministic Maxwell--Bloch system. The Hamiltonian function $H$ is split as $H=H_1+H_3$, with $H_1(y)=\\widehat H_1(y)=\\frac12y_1^2$ and $H_3(y)=\\widehat H_3(y)=y_3$. 
The two associated deterministic subsystems can be solved exactly as follows. \nOn the one hand, the deterministic subsystem corresponding with the vector field $Y_{H_1}=B\\nabla H_1$ is given by\n\\begin{equation*}\n\\left\\lbrace\n\\begin{aligned}\n\\dot y_1&=0\\\\\n\\dot y_2&=y_3y_1\\\\\n\\dot y_3&=-y_2y_1.\n\\end{aligned}\n\\right.\n\\end{equation*}\nObserve that $y_1$ may be considered as a constant and thus $(y_2,y_3)$ is solution of a linear ordinary differential equation (it is the standard harmonic oscillator): the exact solution of the first subsystem is thus given by\n\\begin{equation*}\n\\exp(tY_{H_1})y(0)\n=\n\\begin{pmatrix}1 & 0 & 0\\\\ 0 & \\cos(y_1(0)t) & \\sin(y_1(0)t)\\\\ 0 & -\\sin(y_1(0)t) & \\cos(y_1(0)t)\\end{pmatrix}y(0)\n\\end{equation*}\nfor all $t\\in\\mathbb{R}$ and $y(0)\\in\\mathbb{R}^3$.\n\nOn the other hand, the deterministic subsystem corresponding with the vector field $Y_{H_3}=B\\nabla H_3$ is given by\n\\begin{equation*}\n\\left\\lbrace\n\\begin{aligned}\n\\dot y_1&=y_2\\\\\n\\dot y_2&=0\\\\\n\\dot y_3&=0.\n\\end{aligned}\n\\right.\n\\end{equation*}\nThe exact solution of the second subsystem is thus given by\n\\begin{equation*}\n\\exp(tY_{H_3})y(0)\n=\n\\begin{pmatrix}1 & t & 0\\\\ 0 & 1 & 0\\\\ 0 & 0 & 1\\end{pmatrix}y(0)\n\\end{equation*}\nfor all $t\\in\\mathbb{R}$ and $y(0)\\in\\mathbb{R}^3$.\n\nIn the case of the stochastic Maxwell--Bloch system~\\eqref{smb}, the splitting integrator~\\eqref{slpI} then reads\n\\begin{equation}\\label{smbI}\n\\Phi_h=\\exp(hY_{H_3})\\circ\\exp(hY_{H_1})\\circ\\exp(\\sigma_3\\Delta W_3Y_{\\widehat H_3})\\circ\\exp(\\sigma_1\\Delta W_1Y_{\\widehat H_1}),\n\\end{equation} \nwhere for all $y\\in\\mathbb{R}^3$ one has\n\\[\n\\exp(\\sigma_1\\Delta W_1Y_{\\widehat H_1})y\n=\n\\begin{pmatrix}1 & 0 & 0\\\\ 0 & \\cos(y_1\\sigma_1\\Delta W_1) & \\sin(y_1\\sigma_1\\Delta W_1)\\\\ 0 & -\\sin(y_1\\sigma_1\\Delta W_1) & \\cos(y_1\\sigma_1\\Delta W_1)\\end{pmatrix}y\n\\]\nand\n\\[\n\\exp(\\sigma_3\\Delta 
W_3Y_{\\widehat H_3})y=\\begin{pmatrix}1 & \\sigma_3\\Delta W_3 & 0\\\\ 0 & 1 & 0\\\\ 0 & 0 & 1\\end{pmatrix}y.\n\\]\nOwing to Proposition~\\ref{propo:sPi}, the explicit splitting scheme~\\eqref{smbI} is a stochastic Poisson integrator.\n\n\n\n\n\\subsubsection{Preservation of the Casimir of the stochastic Maxwell--Bloch system}\n\nLet us first illustrate the qualitative behaviour of the stochastic Poisson integrator~\\eqref{smbI} introduced above. Figure~\\ref{fig:CasMaxBloc} illustrates the preservation of the Casimir $C(y)=\\frac{1}{2}(y_2^2+y_3^2)$ by the stochastic Poisson integrator~\\eqref{smbI}. In this numerical experiment, the initial value is $y(0)=(1,2,3)$ and the final time is $T=1$. We consider the two cases where the system~\\eqref{smb} is driven by a single Wiener process: $(\\sigma_1,\\sigma_3)=(1,0)$ and $(\\sigma_1,\\sigma_3)=(0,1)$. Similar results would be obtained if the system were driven by two independent Wiener processes ($\\sigma_1=\\sigma_3=1$ for instance). In Figure~\\ref{fig:CasMaxBloc}, we compare the numerical solutions given by the classical Euler--Maruyama scheme (applied to the It\\^o formulation of the system), the stochastic midpoint scheme from~\\cite{Milstein2002a}, and the explicit splitting scheme~\\eqref{smbI}. The time step size is equal to $h=0.01$. To implement the implicit stochastic midpoint scheme, a truncation of the noise with threshold $A=\\sqrt{4|\\log(h)|}$ is applied (see~\\cite{Milstein2002a} for details). To be able to compare the results for different schemes, we use this truncation in all experiments where the implicit stochastic midpoint scheme is involved. In agreement with Proposition~\\ref{propo:sPi}, we observe that the Casimir function $C(y)=\\frac12(y_2^2+y_3^2)$ is preserved when using the stochastic Poisson integrator~\\eqref{smbI}. 
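This exact preservation is easy to reproduce in a few lines. The following Python sketch (the function names are ours; the flow formulas are those written above) composes the four flows of~\eqref{smbI} and monitors the Casimir along a trajectory.

```python
import math, random

# Sketch of one step of the splitting integrator (smbI) for the stochastic
# Maxwell-Bloch system, composing the four exact flows written above.
# The function names are ours; the flow formulas are those of the text.

def rot(t, y):
    """exp(t Y_{H_1}): rotation of (y2, y3) by the angle y1 * t."""
    y1, y2, y3 = y
    c, s = math.cos(y1 * t), math.sin(y1 * t)
    return (y1, c * y2 + s * y3, -s * y2 + c * y3)

def shear(t, y):
    """exp(t Y_{H_3}): y1 <- y1 + t * y2, the other components unchanged."""
    y1, y2, y3 = y
    return (y1 + t * y2, y2, y3)

def step(h, y, dW1, dW3, sigma1=1.0, sigma3=1.0):
    """One step of (smbI), reading the composition from right to left."""
    y = rot(sigma1 * dW1, y)        # exp(sigma_1 dW_1 Y_{hat H_1})
    y = shear(sigma3 * dW3, y)      # exp(sigma_3 dW_3 Y_{hat H_3})
    y = rot(h, y)                   # exp(h Y_{H_1})
    return shear(h, y)              # exp(h Y_{H_3})

def casimir(y):
    return 0.5 * (y[1] ** 2 + y[2] ** 2)

random.seed(0)
y, h = (1.0, 2.0, 3.0), 0.01
c0 = casimir(y)
for _ in range(100):
    y = step(h, y, random.gauss(0.0, math.sqrt(h)), random.gauss(0.0, math.sqrt(h)))
# the Casimir is preserved up to rounding errors:
assert abs(casimir(y) - c0) < 1e-10
```

Both `rot` (an orthogonal rotation of $(y_2,y_3)$) and `shear` (which only modifies $y_1$) leave $y_2^2+y_3^2$ unchanged, which is why the composition preserves $C$ exactly, in line with Proposition~\ref{propo:sPi}.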
The Casimir function is also preserved when using the stochastic midpoint scheme: indeed, this integrator is known to preserve quadratic invariants, see~\\cite{MR3210739}.\n\n\n\\begin{figure}[h]\n\\centering\n\\includegraphics*[width=0.48\\textwidth,keepaspectratio]{newCasMaxBloc}\n\\includegraphics*[width=0.48\\textwidth,keepaspectratio]{rev-casMaxBloc}\n\\caption{Stochastic Maxwell--Bloch system: preservation of the Casimir by\nthe Euler--Maruyama scheme ($\\times$), the midpoint scheme ($\\circ$), \nand the explicit stochastic Poisson integrator ($\\diamond$). Left: $(\\sigma_1,\\sigma_3)=(1,0)$. Right: $(\\sigma_1,\\sigma_3)=(0,1)$.}\n\\label{fig:CasMaxBloc}\n\\end{figure}\n\n\n\n\\subsubsection{Strong and weak convergence of the explicit stochastic Poisson integrator for the stochastic Maxwell--Bloch system}\n\nThe preservation of the Casimir $C$ is not sufficient to ensure almost sure boundedness of the numerical solution, which is instrumental to deduce Theorem~\\ref{thm-general} from Lemma~\\ref{lemm-aux}. As a consequence, we are not able to state a convergence result for the stochastic Poisson integrator~\\eqref{smbI} in general. However, when $\\sigma_3=0$, it is possible to show the following result.\n\\begin{proposition}\\label{thm-smb}\nConsider a numerical discretisation of the stochastic Maxwell--Bloch system~\\eqref{smb} by the stochastic Poisson integrator~\\eqref{smbI}. Assume that $\\sigma_3=0$. Then, the strong order of convergence and the weak order of convergence of this scheme are equal to $1$.\n\\end{proposition}\n\n\\begin{proof}\nLet us prove that, when $\\sigma_3=0$, the following bounds are satisfied almost surely:\n\\begin{equation}\\label{boundsMB}\n\\left\\lbrace\n\\begin{aligned}\n\\|y^{[n]}\\|&\\le (1+h)^n\\|y^{[0]}\\|\\\\\n\\|y(t)\\|&\\le e^{t}\\|y(0)\\|\n\\end{aligned}\n\\right.\n\\end{equation}\nfor all $n\\ge 0$, $h\\in(0,h_0)$ and $t\\ge 0$.\n\nThe proof of the bounds above is straightforward. 
On the one hand, for all $y\\in\\mathbb{R}^3$, all $h>0$ and $t\\in\\mathbb{R}$, one has\n\\[\n\\|e^{tY_{H_1}}y\\|=\\|e^{tY_{\\widehat H_1}}y\\|=\\|y\\|\n\\]\nand\n\\begin{align*}\n\\|e^{hY_{H_3}}y\\|^2&=(y_1+hy_2)^2+y_2^2+y_3^2=\\|y\\|^2+2hy_1y_2+h^2y_2^2\\\\\n&\\le (1+h)^2\\|y\\|^2.\n\\end{align*}\nTherefore\n\\[\n\\|y^{[n]}\\|\\le \\|e^{hY_{H_3}}\\circ e^{hY_{H_1}}\\circ e^{\\sigma_1\\Delta_n W_1 Y_{\\widehat H_1}}(y^{[n-1]})\n\\|\n\\le (1+h)\\|y^{[n-1]}\\|\n\\]\nthus $\\|y^{[n]}\\|\\le (1+h)^n\\|y^{[0]}\\|$.\n\nOn the other hand, let $\\mathcal{H}(y)=\\|y\\|^2$. Then one has $\\{\\mathcal{H},H_1\\}=\\{\\mathcal{H},\\widehat H_1\\}=0$ and $\\{\\mathcal{H},H_3\\}(y)=2y_1y_2\\le \\mathcal{H}(y)$ for all $y\\in\\mathbb{R}^3$ (recall that the Poisson bracket is defined by~\\eqref{Poisson.bracket}). Using~\\eqref{dH.Poisson.bracket}, one thus obtains $\\,\\mathrm{d} \\mathcal{H}(y(t))\\le \\mathcal{H}(y(t))$ for all $t\\ge 0$ and the bound for the exact solution follows from Gronwall's lemma.\n\nLet $T\\in(0,\\infty)$, one can then repeat the arguments used in the proof of Theorem~\\ref{thm-general} as a corollary of Lemma~\\ref{lemm-aux}, using the almost sure bounds\n\\[\n\\underset{t\\in[0,T]}\\sup~\\norm{y(t)}\\le R(y_0,T),\\quad \\underset{N\\ge 1}\\sup~\\underset{0\\le n\\le N}\\sup~\\norm{y^{[n]}}\\le R(y_0,T)\n\\]\nwith $R(y_0,T)=e^{T}\\|y_0\\|$. The details are omitted. Note that the strong order of convergence is equal to $1$ since the system is driven by a single Wiener process ($m=1$, the commutative noise case condition is satisfied). 
This concludes the proof of Proposition~\\ref{thm-smb}.\n\\end{proof}\n\nWhen $\\sigma_3>0$, one can prove the following moment bound for the numerical solution:\n\\[\n\\E[\\|y^{[n]}\\|^2]\\le e^{(1+\\frac{\\sigma_3^2}{2})nh}\\E[\\|y^{[0]}\\|^2].\n\\]\nThis follows from the inequality\n\\begin{align*}\n\\E[\\|e^{\\sigma_3\\Delta W_3Y_{H_3}}y\\|^2]&=\\E\\bigl[(y_1+\\sigma_3\\Delta W_3y_2)^2+y_2^2+y_3^2\\bigr]=\\|y\\|^2+\\sigma_3^2h|y_2|^2\\\\\n&\\le (1+\\sigma_3^2h)\\|y\\|^2\n\\end{align*}\nand a recursion argument. A similar moment bound holds for the exact solution; however, these moment bounds are not sufficient to prove a strong convergence result.\n\n\nThe objectives of this subsection are first to illustrate Proposition~\\ref{thm-smb}, and second to investigate the behaviour of the strong and weak errors when the condition $\\sigma_3=0$ is removed. Whether it is possible to prove strong and weak convergence estimates, or convergence in probability results, for this problem in the general case is left open for future work.\n\nWe first illustrate the convergence of the strong error. In this numerical experiment, the initial value is $y(0)=(1,2,3)$ and the final time is $T=1$. We consider the two cases where the system~\\eqref{smb} is driven by a single Wiener process: $(\\sigma_1,\\sigma_3)=(1,0)$ and $(\\sigma_1,\\sigma_3)=(0,1)$. Similar results would be obtained if the system were driven by two independent Wiener processes ($\\sigma_1=\\sigma_3=1$ for instance). The reference solution is computed using each scheme with time step size $h_{\\text{ref}}=2^{-16}$, and the schemes are applied with the range of time step sizes $h=2^{-5},\\ldots,2^{-13}$. The expectation is approximated by averaging the error over $M_s=500$ independent Monte Carlo samples.\n\nAs in Figure~\\ref{fig:CasMaxBloc}, we compare the splitting integrator~\\eqref{smbI} with the standard Euler--Maruyama scheme, and the stochastic midpoint scheme from \\cite{Milstein2002a}.
A truncation of the noise is used; see above for details. To be able to compare the results for different schemes, we use this truncation in all experiments where the implicit stochastic midpoint scheme is involved. \n\nFor the cases where the system is driven by a single Wiener process ($\\sigma_3=0$ or $\\sigma_1=0$), the results of the numerical experiment are presented in Figure~\\ref{fig:msMaxBloc}: we observe a strong order of convergence equal to $1$ for the proposed explicit stochastic Poisson integrator~\\eqref{smbI}. This confirms the result of Proposition~\\ref{thm-smb} when $\\sigma_3=0$. We also conjecture that the stochastic Poisson integrator~\\eqref{smbI} has strong order of convergence equal to $1$ when $\\sigma_1=0$.\n\n\\begin{figure}[h]\n\\centering\n\\includegraphics*[width=0.48\\textwidth,keepaspectratio]{newMsMaxBloc}\n\\includegraphics*[width=0.48\\textwidth,keepaspectratio]{rev-msMaxBloc}\n\\caption{Stochastic Maxwell--Bloch system with a single Wiener process: Strong errors of the Euler--Maruyama scheme ($\\times$), the midpoint scheme ($\\circ$), \nand the explicit stochastic Poisson integrator ($\\diamond$). Left: $(\\sigma_1,\\sigma_3)=(1,0)$.
Right: $(\\sigma_1,\\sigma_3)=(0,1)$.}\n\\label{fig:msMaxBloc}\n\\end{figure}\n\n\nFor the case where the system is driven by two Wiener processes, the results of the numerical experiment are presented in Figure~\\ref{fig:msMaxBloc2}, with $\\sigma_1=\\sigma_3=1$.\nBased on the observed convergence behaviour, we conjecture that the strong order of the proposed integrator is $1\/2$.\nThis result is not covered by the theoretical analysis performed in this article.\n\n\n\\begin{figure}[h]\n\\centering\n\\includegraphics*[width=0.48\\textwidth,keepaspectratio]{rev-msMaxBloc2}\n\\caption{Stochastic Maxwell--Bloch system with $(\\sigma_1,\\sigma_3)=(1,1)$: \nStrong errors of the Euler--Maruyama scheme ($\\times$), the midpoint scheme ($\\circ$), \nand the explicit stochastic Poisson integrator ($\\diamond$).}\n\\label{fig:msMaxBloc2}\n\\end{figure}\n\nWe now illustrate the weak convergence of the stochastic Poisson integrator~\\eqref{smbI}. For this numerical experiment, \nwe set $\\sigma_1=\\sigma_3=1$, the initial value is $y(0)=(1,2,3)$ and the final time is $T=1$. The reference solution is computed using each scheme with time step size $h_{\\text{ref}}=2^{-16}$, and the schemes are applied with the range of time step sizes $h=2^{-6},\\ldots,2^{-12}$. The expectation is approximated by averaging the error over $M_s=10^9$ independent Monte Carlo samples. Finally, the test function is given by $\\phi(y)=\\sin(2\\pi y_1)+\\sin(2\\pi y_2)+\\sin(2\\pi y_3)$.\n\n\nThe results are presented in Figure~\\ref{fig:weakMaxBloc}.
According to the observed rate of convergence, we conjecture that the weak order of the proposed integrator is $1$, but this result is not covered by the theoretical analysis performed in this article.\n\n\\begin{figure}[h]\n\\centering\n\\includegraphics*[width=0.48\\textwidth,keepaspectratio]{plotmatlabMB0rdre1}\n\\caption{Stochastic Maxwell--Bloch system: Weak errors of the explicit stochastic Poisson integrator.} \n\\label{fig:weakMaxBloc}\n\\end{figure}\n\nNumerical experiments illustrating the behaviour of the weak error when the system is driven by a single noise are not reported: indeed, as seen in Figure~\\ref{fig:msMaxBloc}, the stochastic Poisson integrator has strong order of convergence equal to $1$ if $\\sigma_3=0$ (rigorous result, Proposition~\\ref{thm-smb}) or if $\\sigma_1=0$ (conjecture). In those cases, the weak error behaves like the strong error and the rate of convergence is $1$.\n\n\nTo conclude this subsection, let us provide a numerical experiment using the scheme~\\eqref{eq:order2} of weak order $2$ presented in Remark~\\ref{rem:order2}. \nFor this numerical experiment, all the values of the parameters are the same as for Figure~\\ref{fig:weakMaxBloc}, except $\\sigma_1=\\sigma_3=10^{-3}$. The results are presented in Figure~\\ref{fig:weakMaxBloc2}. We observe that the weak convergence seems to be of order $2$ for the scheme~\\eqref{eq:order2}, but for small values of $h$ the error saturates due to the Monte Carlo approximation. This is illustrated in the right figure, which gives results for different values of the Monte Carlo sample size.\n\n\\begin{figure}[h]\n\\centering\n\\includegraphics*[width=0.48\\textwidth,keepaspectratio]{plotmatlabMB0rdre2a}\n\\includegraphics*[width=0.48\\textwidth,keepaspectratio]{plotmatlabMB0rdre2b}\n\\caption{Stochastic Maxwell--Bloch system: Weak errors for the weak order $2$ scheme~\\eqref{eq:order2}. Left: $10^9$ Monte Carlo samples.
Right: $10^6$ to $10^9$ Monte Carlo samples.} \n\\label{fig:weakMaxBloc2}\n\\end{figure}\n\n\n\\subsubsection{Asymptotic preserving splitting scheme for the stochastic Maxwell--Bloch system}\n\nIn this subsection, we consider the multiscale version~\\eqref{prob_eps}, parametrized by $\\epsilon$, of the stochastic Maxwell--Bloch system. Based on the expression~\\eqref{smbI} for the stochastic Poisson integrator, applying the general asymptotic preserving scheme~\\eqref{APscheme} introduced in Section~\\ref{APsplit} gives the scheme\n\\begin{equation}\\label{smbIAP}\n\\left\\lbrace\n\\begin{aligned}\ny^{\\epsilon,[n]}&=\\exp(hY_{H_3})\\circ\\exp(hY_{H_1})\\circ\\exp(\\frac{\\sigma_3h\\xi_{3}^{\\epsilon,[n]}}{\\epsilon}Y_{\\widehat H_3})\\circ\\exp(\\frac{\\sigma_1h\\xi_{1}^{\\epsilon,[n]}}{\\epsilon}Y_{\\widehat H_1})(y^{\\epsilon,[n-1]}),\\\\\n\\xi_k^{\\epsilon,[n]}&=\\xi_{k}^{\\epsilon,[n-1]}-\\frac{h}{\\epsilon^2}\\xi_{k}^{\\epsilon,[n]}+\\frac{\\Delta_n W_k}{\\epsilon}=\\frac{1}{1+\\frac{h}{\\epsilon^2}}\\Bigl(\\xi_k^{\\epsilon,[n-1]}+\\frac{\\Delta_n W_k}{\\epsilon}\\Bigr),\\quad k=1,3.\n\\end{aligned}\n\\right.\n\\end{equation}\nThe initial values are $y^{\\epsilon,[0]}=y^{[0]}=y(0)$ and $\\xi_{k}^{\\epsilon,[0]}=0$, $k=1,3$.\n\nFirst, let us illustrate the qualitative behaviour of the scheme, for different values of $\\epsilon$. For this numerical experiment, \n$\\sigma_1=\\sigma_3=0.1$, the initial value is $y(0)=(1,2,3)$ and the final time is $T=1$. The time step size is equal to $h=10^{-3}$. In Figure~\\ref{fig:trajMBAPa}, we illustrate the preservation of the Casimir, up to an error of size $O(10^{-14})$, for the asymptotic preserving scheme applied with $\\epsilon= 1,0.1,0.001$ (left) and for the stochastic Poisson integrator~\\eqref{smbI}, formally $\\epsilon=0$ (right).
In Figure~\\ref{fig:trajMBAPb}, we plot the evolution of the approximation of the trajectory $t_n\\mapsto y(t_n)=(y_1(t_n),y_2(t_n),y_3(t_n))$, for different values of $\\epsilon=1,0.1,0.001,0$. We observe that the trajectories are more regular when $\\epsilon$ is large and converge to the solution of the stochastic Poisson integrator~\\eqref{smbI} as $\\epsilon$ tends to $0$.\n\n\n\\begin{figure}[h]\n\\begin{subfigure}{.5\\textwidth}\n\\centering\\includegraphics*[width=0.8\\textwidth,keepaspectratio]{revcasMaxBloc_APa}\n\\end{subfigure}\n~\n\\begin{subfigure}{.5\\textwidth}\n\\centering\\includegraphics*[width=0.8\\textwidth,keepaspectratio]{revcasMaxBloc_0a}\n\\end{subfigure}\n\\caption{Stochastic Maxwell--Bloch system: evolution of the Casimir using the asymptotic preserving scheme~\\eqref{smbIAP}. Left: $\\epsilon=1,0.1,0.001$. Right: $\\epsilon=0$.\n}\n\\label{fig:trajMBAPa}\n\\end{figure}\n\n\\begin{figure}[h]\n\\centering\n\\begin{subfigure}{.5\\textwidth}\n\\centering\\includegraphics*[width=0.8\\textwidth,keepaspectratio]{revcasMaxBloc_APb}\n\\end{subfigure}\n~\n\\begin{subfigure}{.5\\textwidth}\n\\centering\\includegraphics*[width=0.8\\textwidth,keepaspectratio]{revcasMaxBloc_APc}\n\\end{subfigure}\n\\newline\n\\begin{subfigure}{.5\\textwidth}\n\\centering\\includegraphics*[width=0.8\\textwidth,keepaspectratio]{revcasMaxBloc_APd}\n\\end{subfigure}\n~\n\\begin{subfigure}{.5\\textwidth}\n\\centering\\includegraphics*[width=0.8\\textwidth,keepaspectratio]{revcasMaxBloc_0b}\n\\end{subfigure}\n\\caption{Stochastic Maxwell--Bloch system: trajectories of the numerical solution using the asymptotic preserving scheme~\\eqref{smbIAP}. Top: left $\\epsilon=1$, right $\\epsilon=0.1$.
Bottom: left $\\epsilon=0.001$, right $\\epsilon=0$.}\n\\label{fig:trajMBAPb}\n\\end{figure}\n\nFinally, the last experiment of this subsection illustrates the uniform accuracy property of the splitting scheme~\\eqref{smbIAP} with respect to the parameter $\\epsilon$ in the weak sense. For this numerical experiment, $\\sigma_1=\\sigma_3=0.1$, the initial value is $y(0)=(1,2,3)$ and the final time is $T=1$. The reference solution is computed using each scheme with time step size $h_{\\text{ref}}=2^{-16}$, and the schemes are applied with the range of time step sizes $h=2^{-6},\\ldots,2^{-12}$. The expectation is approximated by averaging the error over $M_s=10^9$ independent Monte Carlo samples. The test function is given by $\\phi(y)=\\sin(2\\pi y_1)+\\sin(2\\pi y_2)+\\sin(2\\pi y_3)$. The parameter $\\epsilon$ takes the following values: $\\epsilon=10^{-2}, 10^{-3}, 10^{-4}, 10^{-5}$. The results are shown in Figure~\\ref{fig:weakMaxBlocAP}. We observe that the weak error seems to be bounded uniformly with respect to $\\epsilon$, with an order of convergence $1$. For a standard method such as the Euler--Maruyama scheme, the behaviour would be totally different: for fixed time step size $h$, the error is expected to be bounded away from $0$ when $\\epsilon$ goes to $0$.
Based on this numerical experiment, we conjecture that the asymptotic preserving scheme is uniformly accurate.\n\n\n\\begin{figure}[h]\n\\centering\n\\includegraphics*[width=0.48\\textwidth,keepaspectratio]{plotmatlabMBAP}\n\\caption{Stochastic Maxwell--Bloch: Weak errors of the asymptotic preserving scheme~\\eqref{smbIAP} \nfor $\\epsilon=10^{-2}, 10^{-3}, 10^{-4}, 10^{-5}$.} \n\\label{fig:weakMaxBlocAP}\n\\end{figure}\n\n\n\n\n\n\n\n\n\n\n\n\\subsection{Explicit stochastic Poisson integrators for the stochastic rigid body system}\\label{ssec:PRB}\n\n\nThis subsection presents explicit stochastic Poisson integrators for the stochastic rigid body system~\\eqref{srb} (Example~\\ref{expl-SRB}). We first give a detailed construction of the splitting scheme, which gives a stochastic Poisson integrator. We then illustrate its qualitative properties (preservation of the Casimir function) and strong and weak error estimates of the proposed scheme by numerical experiments. Finally, we illustrate the asymptotic preserving property (see Section~\\ref{APsplit}) for a multiscale version of the system.\n\nBelow we state and prove Proposition~\\ref{thm-srb}, which gives strong and weak rates of convergence of the proposed explicit scheme. This is a non-trivial result since the coefficients of the considered stochastic differential equation are not globally Lipschitz continuous.\n\n\n\\subsubsection{Presentation of the splitting scheme for the stochastic rigid body system}\n\n\nTo apply the strategy described in Section~\\ref{ssec:LP} and construct explicit stochastic Poisson integrators, we first follow the approach from~\\cite{MR1246065,MR2009376} for the deterministic rigid body system. The Hamiltonian function $H$ is split as $H=H_1+H_2+H_3$, with $H_j=\\frac12\\frac{y_j^2}{I_j}$ for $j=1,2,3$. 
Recall also that the Hamiltonian functions appearing in the stochastic part of the dynamics are given by $\\widehat H_j=\\frac12\\frac{y_j^2}{\\widehat I_j}$, for $j=1,2,3$.\n\nThe application of the general splitting integrator~\\eqref{slpI} for the stochastic rigid body system~\\eqref{srb} requires to compute the exact solutions of the deterministic subsystems\n\\[\n\\dot{y}_j=B(y_j)\\nabla H_j(y_j)\n\\]\nand of the stochastic subsystems\n\\[\n\\,\\mathrm{d} y_j=B(y_j)\\nabla \\widehat H_j(y_j)\\circ \\,\\mathrm{d} W_j(t).\n\\]\nAs explained for instance in~\\cite{MR1246065,MR2009376} for the deterministic system, it is straightforward to solve such subsystems. \nWe only provide the details when $j=1$. In that case, the deterministic subsystem is of the type\n$$\n\\begin{cases}\n\\dot y_1&=0 \\\\\n\\dot y_2&=y_1y_3\/I_1 \\\\\n\\dot y_3&=-y_1y_2\/I_1.\n\\end{cases}\n$$\nThe first equation yields that $y_1$ is constant, i.\\,e. $y_1(t)=y_1(0)$ for all $t\\ge 0$. As a consequence, $(y_2,y_3)$ is a solution of a linear ordinary differential equation.\n\nThe deterministic subsystem when $j=1$ thus admits the exact solution\n$$\n\\begin{pmatrix}y_1(t)\\\\y_2(t)\\\\y_3(t)\\end{pmatrix}=\n\\begin{pmatrix}1 & 0 & 0\\\\ 0 & \\cos(\\theta t) & \\sin(\\theta t)\\\\ 0 & -\\sin(\\theta t) & \\cos(\\theta t)\\end{pmatrix}y(0),\n$$\nwhere $\\theta=\\frac{y_1(0)}{I_1}$. 
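As a quick numerical sanity check, the exact flow displayed above can be coded directly. The sketch below is illustrative and ours (the function name and the Python setting are not from the paper); since the flow is a rotation in the $(y_2,y_3)$-plane, it preserves $y_1$ and the Euclidean norm exactly, up to round-off.

```python
import math

def rigid_body_flow_1(y, t, I1):
    """Exact flow of the j = 1 deterministic subsystem: y1 is constant and
    (y2, y3) rotate through the angle theta * t, with theta = y1(0) / I1."""
    y1, y2, y3 = y
    theta = y1 / I1
    c, s = math.cos(theta * t), math.sin(theta * t)
    return (y1, c * y2 + s * y3, -s * y2 + c * y3)

y0 = (math.cos(1.1), 0.0, math.sin(1.1))  # initial value used in the experiments
yt = rigid_body_flow_1(y0, 0.7, 2.0)      # I1 = 2, as in I = (2, 1, 2/3)
```

Replacing $t$ by an increment of $W_1$ (and $I_1$ by $\widehat I_1$) gives the flow of the corresponding stochastic subsystem in exactly the same way.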
Similarly, the stochastic subsystem when $j=1$ is written as\n$$\n\\begin{cases}\n\\,\\mathrm{d} y_1&=0\\\\\n\\,\\mathrm{d} y_2&= y_1y_3\/\\widehat I_1\\circ\\,\\,\\mathrm{d} W_1\\\\\n\\,\\mathrm{d} y_3&=-y_1y_2\/\\widehat I_1 \\circ\\,\\,\\mathrm{d} W_1\n\\end{cases}\n$$\nand it admits the exact solution\n$$\n\\begin{pmatrix}y_1(t)\\\\y_2(t)\\\\y_3(t)\\end{pmatrix}=\n\\begin{pmatrix}1 & 0 & 0\\\\ 0 & \\cos(\\theta W_1(t)) & \\sin(\\theta W_1(t))\\\\ 0 & -\\sin(\\theta W_1(t)) & \\cos(\\theta W_1(t))\\end{pmatrix}y(0),\n$$\nwhere $\\theta=\\frac{y_1(0)}{\\widehat I_1}$.\n\nThe solutions of the deterministic and stochastic subsystems when $j=2,3$ have similar expressions, which are not written here for brevity.\n\nFinally, setting $\\Phi_h^{\\text{det}}=\\exp(hY_{H_3})\\circ\\exp(hY_{H_2})\\circ\\exp(hY_{H_1})$ and $\\Phi_{\\Delta W}^{\\text{stoch}}=\\exp(\\Delta W_3Y_{\\widehat H_3})\\circ \n\\exp(\\Delta W_2Y_{\\widehat H_2})\\circ\\exp(\\Delta W_1Y_{\\widehat H_1})$, the general splitting integrator~\\eqref{slpI} applied to the stochastic rigid body system~\\eqref{srb} gives \n\\begin{align}\\label{srbI}\n\\Phi_h=\\Phi_h^{\\text{det}}\\circ\\Phi_{\\Delta W}^{\\text{stoch}}&=\\exp(hY_{H_3})\\circ\\exp(hY_{H_2})\\circ\\exp(hY_{H_1})\n\\nonumber\\\\\n&\\quad \\circ\\exp(\\Delta W_3Y_{\\widehat H_3})\\circ \\exp(\\Delta W_2Y_{\\widehat H_2})\\circ\\exp(\\Delta W_1Y_{\\widehat H_1}).\n\\end{align}\nOwing to Proposition~\\ref{propo:sPi}, the explicit splitting scheme~\\eqref{srbI} is a stochastic Poisson integrator. In particular, it preserves the Casimir function $C(y)=y_1^2+y_2^2+y_3^2$, which has compact level sets.\n\n\n\n\\subsubsection{Preservation of the Casimir of the stochastic rigid body system}\n\n\n\nLet us first illustrate the qualitative behaviour of the stochastic Poisson integrator~\\eqref{srbI} introduced above.
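In code, one step of the splitting scheme~\eqref{srbI} is the composition of six axis rotations: three stochastic factors (angles $\Delta W_j\, y_j/\widehat I_j$), then three deterministic factors (angles $h\, y_j/I_j$). The sketch below is ours, and it assumes the standard rigid body Poisson matrix $B(y)$, for which the $j=2,3$ flows are the cyclic analogues of the $j=1$ flow written above. Every factor is a rotation, so the Casimir $C(y)=y_1^2+y_2^2+y_3^2$ is preserved exactly, up to round-off.

```python
import math
import random

def subsystem_flow(y, axis, angle):
    """Exact flow of one subsystem: the component `axis` is frozen and the
    two remaining components rotate (cyclic analogue of the j = 1 case)."""
    i, j, k = axis, (axis + 1) % 3, (axis + 2) % 3
    c, s = math.cos(angle), math.sin(angle)
    out = [0.0, 0.0, 0.0]
    out[i] = y[i]
    out[j] = c * y[j] + s * y[k]
    out[k] = -s * y[j] + c * y[k]
    return out

def splitting_step(y, h, I, I_hat, dW):
    """One step: stochastic factors first (angles dW_j * y_j / I_hat_j),
    then deterministic factors (angles h * y_j / I_j)."""
    for axis in range(3):
        y = subsystem_flow(y, axis, dW[axis] * y[axis] / I_hat[axis])
    for axis in range(3):
        y = subsystem_flow(y, axis, h * y[axis] / I[axis])
    return y

random.seed(1)
h, I, I_hat = 0.2, (2.0, 1.0, 2.0 / 3.0), (1.0, 2.0, 3.0)
y = [math.cos(1.1), 0.0, math.sin(1.1)]
casimir0 = sum(c * c for c in y)
for _ in range(100):  # T = 20 with h = 0.2, as in the experiment below
    dW = [random.gauss(0.0, math.sqrt(h)) for _ in range(3)]
    y = splitting_step(y, h, I, I_hat, dW)
```

Each angle uses the component that is frozen during the corresponding substep, so the composition is fully explicit.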
In this numerical experiment, \nthe moments of inertia are $I=(2,1,2\/3)$, $\\widehat I=(1,2,3)$, the initial value is $y(0)=(\\cos(1.1),0,\\sin(1.1))$ and the final time is $T=20$.\nIn Figure~\\ref{fig:RB}, we compare the numerical solutions given \nby the classical Euler--Maruyama scheme (applied to the It\\^o formulation of the system), the stochastic midpoint scheme from~\\cite{Milstein2002a}, and the explicit splitting scheme~\\eqref{srbI}. The time step size is equal to $h=0.2$ ($T\/h=100$). A truncation of the noise is used for this experiment; \nsee above for details. As proved in Proposition~\\ref{propo:sPi}, we observe that the Casimir function $C(y)=y_1^2+y_2^2+y_3^2$ is preserved when using the stochastic Poisson integrator~\\eqref{srbI}. The Casimir function is also preserved when using the stochastic midpoint scheme: indeed, this integrator is known to preserve quadratic invariants, see~\\cite{MR3210739}.\n\nIn addition, plots of the evolution of the Hamiltonian for the three schemes (middle figure) and of the trajectory on the sphere for the proposed splitting scheme (right figure) are presented in Figure~\\ref{fig:RB}. \n\n\\begin{figure}[h]\n\\centering\n\\includegraphics*[width=0.3\\textwidth,keepaspectratio]{trajSRBL2}\n\\includegraphics*[width=0.3\\textwidth,keepaspectratio]{trajSRBL1}\n\\includegraphics*[width=0.3\\textwidth,keepaspectratio]{trajSRBL3.pdf}\n\\caption{Stochastic rigid body system: Qualitative behaviour of the Euler--Maruyama scheme ($\\times$), the midpoint scheme ($\\circ$), \nand the explicit stochastic Poisson integrator ($\\diamond$). Left: preservation of the Casimir. Middle: evolution of the Hamiltonian.
Right: trajectory on the sphere for the scheme~\\eqref{srbI}.}\n\\label{fig:RB}\n\\end{figure}\n\n\\subsubsection{Strong and weak convergence of the explicit stochastic Poisson integrator for the stochastic rigid body system}\n\n\nThe general Theorem~\\ref{thm-general} is applicable in the case of stochastic rigid body system since the Casimir function $C$ has compact level sets.\n\n\\begin{proposition}\\label{thm-srb}\nConsider a numerical discretisation of the stochastic rigid body system~\\eqref{srb} by the stochastic Poisson integrator~\\eqref{srbI}. \nThen, the strong order of convergence of this scheme is $1\/2$ and the weak order of convergence is $1$.\n\\end{proposition}\n\n\\begin{proof}\nThe stochastic Poisson system~\\eqref{srb} admits the Casimir function $y\\mapsto C(y)=y_1^2+y_2^2+y_3^2$, which has compact level sets. \nIt then suffices to apply the general convergence result, Theorem~\\ref{thm-general}, which \nconcludes the proof of Proposition~\\ref{thm-srb}.\n\\end{proof}\n\nLet us first illustrate the strong convergence result. We compare the behaviours of the three integrators introduced above: the Euler--Maruyama scheme, the stochastic midpoint scheme, \nand the explicit stochastic Poisson integrator~\\eqref{srbI}. Note that Proposition~\\ref{thm-srb} is valid only for the splitting scheme~\\eqref{srbI}. For this numerical experiment, the moments of inertia are $I=(2,1,2\/3)$, $\\widehat I=(1,2,3)$, the initial value is $y(0)=(\\cos(1.1),0,\\sin(1.1))$ and the final time is $T=1$. The reference solution is computed using each scheme with time step size $h_{\\text{ref}}=2^{-16}$, and the schemes are applied with the range of time step sizes $h=2^{-5},\\ldots,2^{-13}$. 
The expectation is approximated by averaging the error over $M_s=500$ independent Monte Carlo samples.\n\n\nThe results are presented in Figure~\\ref{fig:msSRBL}: we observe a strong order of convergence equal to $1\/2$ for the proposed explicit stochastic Poisson integrator~\\eqref{srbI}, which confirms the result of Proposition~\\ref{thm-srb}.\n\n\\begin{figure}[h]\n\\centering\n\\includegraphics*[width=0.48\\textwidth,keepaspectratio]{msSRBL}\n\\caption{Stochastic rigid body system: Strong errors for the Euler--Maruyama scheme ($\\times$), the midpoint scheme ($\\circ$), and the explicit stochastic Poisson integrator ($\\diamond$).}\n\\label{fig:msSRBL}\n\\end{figure}\n\nWe now illustrate the weak convergence of the stochastic Poisson integrator~\\eqref{srbI}. For this numerical experiment, the moments of inertia are $I=\\hat{I}=(2,1,2\/3)$, the initial value is $y(0)=(\\cos(1.1),0,\\sin(1.1))$ and the final time is $T=1$. The reference solution is computed using each scheme with time step size $h_{\\text{ref}}=2^{-16}$, and the schemes are applied with the range of time step sizes $h=2^{-6},\\ldots,2^{-12}$. The expectation is approximated by averaging the error over $M_s=10^9$ independent Monte Carlo samples. Finally, the test function is given by $\\phi(y)=\\sin(2\\pi y_1)+\\sin(2\\pi y_2)+\\sin(2\\pi y_3)$.\nThe results are presented in Figure~\\ref{fig:weakRGB}. We observe a weak order $1$ for the proposed explicit stochastic Poisson integrator~\\eqref{srbI}, which confirms the result of Proposition~\\ref{thm-srb}.\n\n\\begin{figure}[h]\n\\centering\n\\includegraphics*[width=0.48\\textwidth,keepaspectratio]{plotmatlabRGB0rdre1}\n\\caption{Stochastic rigid body system: Weak error for the explicit stochastic Poisson integrator~\\eqref{srbI}.}\n\\label{fig:weakRGB}\n\\end{figure}\n\nTo conclude this subsection, let us provide a numerical experiment using the scheme of weak order $2$ presented in Remark~\\ref{rem:order2}.
To do so, we consider the following variant of the stochastic rigid body system~\\eqref{srb}:\n\\begin{align*}\n\\,\\mathrm{d}\\begin{pmatrix}y_1\\\\y_2\\\\y_3\\end{pmatrix}&=\nB(y)\n\\left(\\nabla H(y)\\,\\,\\mathrm{d} t+\\sigma_1\\nabla \\widehat H_1(y)\\circ\\,\\,\\mathrm{d} W_1(t)\n+\\sigma_2\\nabla \\widehat H_2(y)\\circ\\,\\,\\mathrm{d} W_2(t)\\right.\\nonumber\\\\\n&\\quad+\\left.\\sigma_3\\nabla \\widehat H_3(y)\\circ\\,\\,\\mathrm{d} W_3(t) \\right),\n\\end{align*}\nwith nonnegative real numbers $\\sigma_1,\\sigma_2,\\sigma_3$. For this numerical experiment, all the values of the parameters are the same as for Figure~\\ref{fig:weakRGB}, except the values of the additional parameters $\\sigma_1,\\sigma_2,\\sigma_3$: one has either three Wiener processes with $(\\sigma_1,\\sigma_2,\\sigma_3)=(10^{-3},10^{-3},10^{-3})$ (left figure), or a single Wiener process, with three possible choices $(\\sigma_1,\\sigma_2,\\sigma_3)=(10^{-3},0,0)$, $(\\sigma_1,\\sigma_2,\\sigma_3)=(0,10^{-3},0)$ and $(\\sigma_1,\\sigma_2,\\sigma_3)=(0,0,10^{-3})$ (right figure). The results are presented in Figure~\\ref{fig:weakRGBorder2}. We observe that the weak convergence seems to be of order $2$ for the scheme~\\eqref{eq:order2}; however, for small values of $h$ the error saturates due to the Monte Carlo approximation.\n\n\\begin{figure}[h]\n\\centering\n\\includegraphics*[width=0.48\\textwidth,keepaspectratio]{weakRGB2a}\n\\includegraphics*[width=0.48\\textwidth,keepaspectratio]{weakRGB2b}\n\\caption{Stochastic rigid body system: Weak error for the integrator~\\eqref{eq:order2}. Left: $(\\sigma_1,\\sigma_2,\\sigma_3)=(10^{-3},10^{-3},10^{-3})$.
Right: $(\\sigma_1,\\sigma_2,\\sigma_3)=(10^{-3},0,0)$, $(\\sigma_1,\\sigma_2,\\sigma_3)=(0,10^{-3},0)$ and $(\\sigma_1,\\sigma_2,\\sigma_3)=(0,0,10^{-3})$.} \n\\label{fig:weakRGBorder2}\n\\end{figure}\n\n\\subsubsection{Asymptotic preserving splitting scheme for the stochastic rigid body system}\n\nIn this subsection, we consider the multiscale version~\\eqref{prob_eps}, parametrized by $\\epsilon$, of the stochastic rigid body system. Based on the expression~\\eqref{srbI} for the stochastic Poisson integrator, applying the general asymptotic preserving scheme~\\eqref{APscheme} introduced in Section~\\ref{APsplit} gives the scheme\n\\begin{equation}\\label{srbIAP}\n\\left\\lbrace\n\\begin{aligned}\ny^{\\epsilon,[n]}&=\\exp(hY_{H_3})\\circ\\exp(hY_{H_2})\\circ\\exp(hY_{H_1})\\\\\n&\\quad \\circ\\exp(\\frac{h\\xi_{3}^{\\epsilon,[n]}}{\\epsilon}Y_{\\widehat H_3})\\circ \\exp(\\frac{h\\xi_{2}^{\\epsilon,[n]}}{\\epsilon}Y_{\\widehat H_2})\\circ\\exp(\\frac{h\\xi_{1}^{\\epsilon,[n]}}{\\epsilon}Y_{\\widehat H_1})(y^{\\epsilon,[n-1]}),\\\\\n\\xi_k^{\\epsilon,[n]}&=\\xi_{k}^{\\epsilon,[n-1]}-\\frac{h}{\\epsilon^2}\\xi_{k}^{\\epsilon,[n]}+\\frac{\\Delta_n W_k}{\\epsilon}=\\frac{1}{1+\\frac{h}{\\epsilon^2}}\\Bigl(\\xi_k^{\\epsilon,[n-1]}+\\frac{\\Delta_n W_k}{\\epsilon}\\Bigr),\\quad k=1,2,3.\n\\end{aligned}\n\\right.\n\\end{equation}\nThe initial values are $y^{\\epsilon,[0]}=y^{[0]}=y(0)$ and $\\xi_{k}^{\\epsilon,[0]}=0$, $k=1,2,3$.\n\n\nFirst, let us illustrate the qualitative behaviour of the scheme~\\eqref{srbIAP}, for different values of $\\epsilon$. For this numerical experiment, the moments of inertia are $I=\\hat{I}=(2,1,2\/3)$, the initial value is $y(0)=(\\cos(1.1),0,\\sin(1.1))$ and the final time is $T=1$. The time step size is equal to $h=10^{-4}$.
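The update for the auxiliary variables $\xi_k^{\epsilon,[n]}$ in~\eqref{srbIAP} is a drift-implicit Euler step for an Ornstein--Uhlenbeck process, with the implicit relation solved in closed form. A minimal sketch (the variable names are ours):

```python
def xi_update(xi_prev, dW, h, eps):
    """Closed-form solution of the implicit relation
    xi = xi_prev - (h / eps**2) * xi + dW / eps."""
    return (xi_prev + dW / eps) / (1.0 + h / eps**2)

xi_prev, dW, h, eps = 0.3, 0.05, 1.0e-3, 1.0e-2
xi = xi_update(xi_prev, dW, h, eps)
# The implicit relation is satisfied exactly, up to round-off:
residual = xi - (xi_prev - (h / eps**2) * xi + dW / eps)
```

Note that in the formal limit $\epsilon\to 0$ one has $h\xi_k^{\epsilon,[n]}/\epsilon\approx \Delta_n W_k$, consistent with the scheme reducing to the stochastic Poisson integrator~\eqref{srbI} when $\epsilon=0$.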
In Figure~\\ref{fig:trajSRBAPa}, we plot the evolution of the Hamiltonian (top) and the Casimir (bottom) for the asymptotic preserving scheme~\\eqref{srbIAP} applied with $\\epsilon= 1,0.1,0.001$ (left) and for the stochastic Poisson integrator~\\eqref{srbI}, formally $\\epsilon=0$ (right). We observe the preservation of the Casimir function. In Figure~\\ref{fig:trajSRBAPb}, we plot the evolution of the approximation of the trajectory $t_n\\mapsto y(t_n)=(y_1(t_n),y_2(t_n),y_3(t_n))$, for different values of $\\epsilon=1,0.1,0.001,0$. We observe that the trajectories are more regular when $\\epsilon$ is large and converge to the solution of the stochastic Poisson integrator~\\eqref{srbI} as $\\epsilon$ tends to $0$.\n\n\n\n\\begin{figure}[h]\n\\begin{subfigure}{.5\\textwidth}\n\\centering\n\\centering\\includegraphics*[width=0.8\\textwidth,keepaspectratio]{trajSRBL1_AP}\n\\end{subfigure}\n~\n\\begin{subfigure}{.5\\textwidth}\n\\centering\\includegraphics*[width=0.8\\textwidth,keepaspectratio]{trajSRBL1_0}\n\\end{subfigure}\n\\newline\n\\begin{subfigure}{.5\\textwidth}\n\\centering\\includegraphics*[width=0.8\\textwidth,keepaspectratio]{trajSRBL2_AP}\n\\end{subfigure}\n~\n\\begin{subfigure}{.5\\textwidth}\n\\centering\n\\includegraphics*[width=0.8\\textwidth,keepaspectratio]{trajSRBL2_0}\n\\end{subfigure}\n\\caption{Stochastic rigid body system: evolution of the Hamiltonian (top) and the Casimir (bottom) of the numerical solution using the asymptotic preserving scheme~\\eqref{srbIAP}. Left: $\\epsilon=1,0.1,0.001$. 
Right: $\\epsilon=0$.}\n\\label{fig:trajSRBAPa}\n\\end{figure}\n\n\\begin{figure}[h]\n\\centering\n\\begin{subfigure}{.5\\textwidth}\n\\centering\n\\includegraphics*[width=0.8\\textwidth,keepaspectratio]{trajSRBL3_APa}\n\\end{subfigure}\n~\n\\begin{subfigure}{.5\\textwidth}\n\\centering\\includegraphics*[width=0.8\\textwidth,keepaspectratio]{trajSRBL3_APb}\n\\end{subfigure}\n\n\\begin{subfigure}{.5\\textwidth}\n\\centering\\includegraphics*[width=0.8\\textwidth,keepaspectratio]{trajSRBL3_APc}\n\\end{subfigure}\n~\n\\begin{subfigure}{.5\\textwidth}\n\\centering\\includegraphics*[width=0.8\\textwidth,keepaspectratio]{trajSRBL3_0}\n\\end{subfigure}\n\\caption{Stochastic rigid body system: Trajectory of the numerical solutions using the asymptotic preserving scheme~\\eqref{srbIAP}. Top: $\\epsilon=1,0.1$. Bottom: $\\epsilon=0.001,0$.}\n\\label{fig:trajSRBAPb}\n\\end{figure}\n\n\nFinally, the last experiment of this subsection illustrates the uniform accuracy property of the splitting scheme~\\eqref{srbIAP} with respect to the parameter $\\epsilon$ in the weak sense. For this numerical experiment, the moments of inertia are $I=(2,1,2\/3)$ and $\\hat{I}=(20,10,20\/3)$, the initial value is $y(0)=(\\cos(1.1),0,\\sin(1.1))$ and the final time is $T=1$. The reference solution is computed using each scheme with time step size $h_{\\text{ref}}=2^{-16}$, and the schemes are applied with the range of time step sizes $h=2^{-6},\\ldots,2^{-12}$. The expectation is approximated by averaging the error over $M_s=10^8$ independent Monte Carlo samples. The test function is given by $\\phi(y)=\\sin(2\\pi y_1)+\\sin(2\\pi y_2)+\\sin(2\\pi y_3)$. The parameter $\\epsilon$ takes the following values: $\\epsilon=10^{-2}, 10^{-3}, 10^{-4}, 10^{-5}$. The results are shown in Figure~\\ref{fig:plotmatlabRGBAP}. We observe that the weak error seems to be bounded uniformly with respect to $\\epsilon$, with an order of convergence $1$.
For a standard method such as the Euler--Maruyama scheme, the behaviour would be totally different: for fixed time step size $h$, the error is expected to be bounded away from $0$ when $\\epsilon$ goes to $0$. Based on this numerical experiment, we conjecture that the asymptotic preserving scheme is uniformly accurate.\n\n\\begin{figure}[h]\n\\centering\n\\includegraphics*[width=0.48\\textwidth,keepaspectratio]{plotmatlabRGBAP}\n\\caption{Stochastic rigid body system: Weak errors of the asymptotic preserving scheme~\\eqref{srbIAP} for $\\epsilon=10^{-2}, 10^{-3}, 10^{-4}, 10^{-5}$.} \n\\label{fig:plotmatlabRGBAP}\n\\end{figure}\n\n\n\n\\subsection{Explicit stochastic Poisson integrators for the stochastic sine--Euler system}\\label{ssec:PSE} \n\nThe last example of a stochastic Lie--Poisson system studied in this work is the stochastic version of the sine--Euler equations~\\eqref{stochSE} introduced in Example~\\ref{expl-SE}. As for the previous examples, an explicit stochastic Poisson integrator is designed using a splitting strategy. We illustrate both the qualitative behaviour of the proposed integrator (preservation of Casimir functions) and strong error estimates.
Note that numerical experiments which would illustrate weak error estimates or the asymptotic preserving property are not reported for this example: indeed, the results would be similar to those presented for the two other examples above.\n\n\nRecall that the stochastic sine--Euler system~\\eqref{stochSE} is of the type\n\\begin{align*}\n\\,\\mathrm{d} \\omega&=B(\\omega)\\left(\\nabla H(\\omega)\\,\\,\\mathrm{d} t+\n\\sigma_{(1,0)}\\nabla \\widehat H_{(1,0)}(\\omega)\\circ\\,\\,\\mathrm{d} W_{(1,0)}(t)+\\sigma_{(1,1)} \\nabla \\widehat H_{(1,1)}(\\omega)\\circ\\,\\,\\mathrm{d} W_{(1,1)}(t) \\right.\\nonumber\\\\\n&\\quad\\left.+\\sigma_{(0,1)}\\nabla \\widehat H_{(0,1)}(\\omega)\\circ\\,\\,\\mathrm{d} W_{(0,1)}(t) \n+\\sigma_{(-1,1)}\\nabla \\widehat H_{(-1,1)}(\\omega)\\circ\\,\\,\\mathrm{d} W_{(-1,1)}(t)\\right).\n\\end{align*}\nThis is a stochastic Lie--Poisson system. In order to design a splitting integrator, note that the Hamiltonian function $H$ can be split as $H=H_{(1,0)}+H_{(1,1)}+H_{(0,1)}+H_{(-1,1)}$, with $H_{\\bf k}(\\omega)=\\widehat H_{\\bf k}(\\omega)=\\frac{\\omega_{\\bf k}\\omega_{\\bf k}^\\star}{|{\\bf k}|^2}$. \n\nAs for the other examples, the deterministic subsystems\n\\[\n\\dot{\\omega}_{\\bf k}=B(\\omega_{\\bf k})\\nabla H_{\\bf k}(\\omega_{\\bf k})\n\\]\nand the stochastic subsystems\n\\[\n\\,\\mathrm{d} \\omega_{\\bf k}=B(\\omega_{\\bf k})\\nabla \\widehat H_{\\bf k}(\\omega_{\\bf k})\\circ \\,\\mathrm{d} W_{\\bf k}(t)\n\\]\ncan be solved exactly: indeed, for each subsystem, the variable $\\omega_{\\bf k}$ is preserved and the three other variables evolve following a linear differential equation. We refer to~\\cite{MR1860719,MR1246065} for the explanation of this idea for the deterministic subsystems. The treatment of the stochastic subsystems is straightforward using the exact solution of the subsystems $\\dot{\\omega}_{\\bf k}=B(\\omega_{\\bf k})\\nabla \\widehat H_{\\bf k}(\\omega_{\\bf k})$.
\nThe splitting scheme for the stochastic sine--Euler SDE \\eqref{stochSE} then reads \n\\begin{align}\\label{stochSEI}\n\\Phi_h=\\Phi_h^{\\text{det}}\\circ\\Phi_{\\Delta W}^{\\text{stoch}}&=\n\\exp(h\\Omega_{H_{(-1,1)}})\\circ\\exp(h\\Omega_{H_{(0,1)}})\\circ\\exp(h\\Omega_{H_{(1,1)}})\\circ\\exp(h\\Omega_{H_{(1,0)}})\\nonumber \\\\\n&\\quad\\circ\n\\exp(\\sigma_{(-1,1)}\\Delta W_{(-1,1)}\\Omega_{\\widehat H_{(-1,1)}})\\circ\\exp(\\sigma_{(0,1)}\\Delta W_{(0,1)}\\Omega_{\\widehat H_{(0,1)}})\\nonumber\\\\\n&\\quad\\circ \\exp(\\sigma_{(1,1)}\\Delta W_{(1,1)}\\Omega_{\\widehat H_{(1,1)}})\n\\circ\\exp(\\sigma_{(1,0)}\\Delta W_{(1,0)}\\Omega_{\\widehat H_{(1,0)}}),\n\\end{align}\nwhere we denote by $\\Omega_{H_{\\bf k}}$, resp. by $\\Omega_{\\widehat H_{\\bf k}}$, the exact flow of the ODE subsystem, resp. SDE subsystem, with index ${\\bf k}\\in\\{(1,0),(1,1),(0,1),(-1,1)\\}$.\n\n\\subsubsection{Preservation of Casimir functions for the stochastic sine--Euler system}\n\nRecall that the stochastic sine--Euler system admits two Casimir functions $C_1$ and $C_2$ given in Example~\\ref{expl-SE}. Owing to Proposition~\\ref{propo:sPi}, the numerical scheme~\\eqref{stochSEI} is a stochastic Poisson integrator, in particular it preserves the two Casimir functions $C_1$ and $C_2$. \nWe numerically illustrate this property in Figure~\\ref{fig:trajSE}, where one sample of the numerical solution is computed with the time step size $h=0.02$.\nThe initial value is \n$\\omega(0)=(\\omega_{(1,0)}(0),\\omega_{(1,1)}(0),\\omega_{(0,1)}(0),\\omega_{(-1,1)}(0))=(0.1+0.3i, 0.2+0.3i, 0.3+0.2i, 0.4+0.1i)$. In addition, $\\sigma_k=1$ for all $k$ in this experiment. 
Figure~\\ref{fig:trajSE} confirms that the two Casimir functions are preserved by the proposed integrator~\\eqref{stochSEI}.\n\n\\begin{figure}[h]\n\\centering\n\\includegraphics*[width=0.48\\textwidth,keepaspectratio]{trajSE}\n\\caption{Stochastic sine--Euler system: Preservation of the two Casimir functions by the explicit stochastic Poisson integrator~\\eqref{stochSEI}.}\n\\label{fig:trajSE}\n\\end{figure}\n\n\\subsubsection{Strong convergence of the explicit stochastic Poisson integrator for the stochastic sine--Euler system}\n\n\n\n\nLike for the stochastic rigid body system, we have the following convergence result due to the preservation of the Casimir function $C_1$. \n\\begin{proposition}\\label{thm-se}\nConsider a numerical discretisation of the stochastic sine--Euler system~\\eqref{stochSE} by the explicit stochastic Poisson integrator~\\eqref{srbI}. \nThen, the strong order of convergence of this scheme is $1\/2$, and the weak order of convergence is $1$. If the system is driven by a single Wiener process, the strong order of convergence is $1$.\n\\end{proposition}\n\nAs already explained, the convergence result above is not trivial since the scheme is explicit: since the coefficients of the equations are not globally Lipschitz continuous, the explicit Euler--Maruyama scheme does not converge in the strong sense.\n\n\\begin{proof}\nThe stochastic Poisson system~\\eqref{stochSE} admits the Casimir function $\\omega\\mapsto C_1(\\omega)=|\\omega_1|^2+\\ldots+|\\omega_4|^2$, which has compact level sets. \nIt then suffices to apply the general convergence result, Theorem~\\ref{thm-general}, which \nconcludes the proof of Proposition~\\ref{thm-se}.\n\\end{proof}\n\n\nWe now numerically illustrate the strong rate of convergence of the proposed integrator~\\eqref{stochSEI}\nwhen applied to the SDE \\eqref{stochSE}. 
The final time is $T=1$ and the initial value is
$\omega(0)=(\omega_{(1,0)}(0),\omega_{(1,1)}(0),\omega_{(0,1)}(0),\omega_{(-1,1)}(0))=(0.1+0.3i, 0.2+0.3i, 0.3+0.2i, 0.4+0.1i)$. The reference solution is computed using the proposed scheme with time step size $h_{\text{ref}}=2^{-15}$, and the scheme is applied with the range of time step sizes $h=2^{-5},\ldots,2^{-13}$. The expectation is approximated by averaging the error over $M_s=500$ independent Monte Carlo samples. The results are presented in Figure~\ref{fig:msSE}. On the left, $\sigma_1=1$ and $\sigma_j=0$ for $j=2,\ldots,4$: we observe an order of convergence equal to $1$, which confirms the result in Proposition~\ref{thm-se} when the system is driven by a single Wiener process. On the right, $\sigma_j=1$ for $j=1,\ldots,4$, which means that the system is driven by four independent Wiener processes. We observe an order of convergence equal to $1/2$, which confirms the result in Proposition~\ref{thm-se}.

\begin{figure}[h]
\centering
\includegraphics*[width=0.48\textwidth,keepaspectratio]{msSE1}
\includegraphics*[width=0.48\textwidth,keepaspectratio]{msSE}
\caption{Stochastic sine--Euler system: Strong errors of the explicit stochastic Poisson integrator~\eqref{stochSEI}. Left: single noise. Right: multiple noise.}
\label{fig:msSE}
\end{figure}

\section{Introduction}
\label{sec:introduction}

\IEEEPARstart{D}{eep} convolutional neural networks (CNNs) dominate most visual recognition problems and applications, including semantic segmentation \cite{FCN}, action recognition \cite{simonyan2014two}
and object detection \cite{girshick2014rich}, among many others. When full supervision is available, CNNs can achieve outstanding performances, but this type of supervision may not be available in a breadth of applications.
In semantic segmentation, for instance, full supervision involves annotating all the pixels in each training image. The problem is further amplified when such annotations require expert knowledge or involve volumetric data, as is the case in medical imaging \cite{Litjens2017}. Therefore, the supervision of semantic segmentation with partial or weak labels, for example, scribbles \cite{scribblesup,ncloss:cvpr18,tang2018regularized}, image tags \cite{pathak2015constrained,papandreou2015weakly}, bounding boxes \cite{deepcut} or points \cite{Bearman2016}, has received significant research efforts in recent years.

Imposing prior knowledge on the network's prediction via some unsupervised loss is a well-established technique in semi-supervised learning \cite{weston2012deep,goodfellow2016deep}. Such a prior acts as a regularizer that leverages unlabeled data with domain-specific knowledge. For instance, in semantic segmentation, several recent works showed that adding loss terms such as dense conditional random fields (CRFs) \cite{tang2018regularized,Marin2019CVPR}, graph clustering \cite{ncloss:cvpr18} or priors on the sizes of the target regions \cite{kervadec2019constrained, Zhou2019ICCV, Kervadec2019MICCAI} can
achieve outstanding performances with only fractions of full supervision labels. However, imposing hard inequality or equality constraints on the output of deep CNNs is still in a nascent stage, and only a few recent works have focused on the subject \cite{pathak2015constrained,Marquez-Neila2017,Ravi2018,kervadec2019constrained}.

\subsection{Problem formulation}
We consider a general class of semi- or weakly-supervised semantic segmentation problems, where {\em global inequality constraints} are enforced on the network's output.
Consider a training image $I: \Omega \subset \mathbb{R}^2 \rightarrow \mathbb{R}$, with $\Omega_{\cal L} \subset \Omega$ a set of labeled pixels, which corresponds to a fraction of the pixels in image domain $\Omega$. For $K$ classes, let $\mathbf{y}_p = (y_p^1, \dots y_p^K) \in \{ 0,1 \}^K$ denote the ground-truth label of pixel $p \in \Omega_{\cal L}$, and $S_{\boldsymbol{\theta}} \in [0, 1]^{|\Omega| \times K}$ a standard K-way softmax probability output, with $\boldsymbol{\theta}$ the network's parameters.
In matrix $S_{\boldsymbol{\theta}}$, each row $\boldsymbol{s}_{p,\boldsymbol{\theta}} = (s_{p,\boldsymbol{\theta}}^1, \dots, s_{p,\boldsymbol{\theta}}^K) \in [0,1]^K$ corresponds to the predictions for a pixel $p$ in $\Omega$, which can be either unlabeled or labeled.
We focus on constrained problems of the following general form:
\begin{align}
\label{Constrained-CNN}
&\min_{\boldsymbol{\theta}} \, {\cal E}(\boldsymbol{\theta}) \nonumber \\
&\mbox{s.t.} \, f_i(S_{\boldsymbol{\theta}}) \leq 0, \quad i=1, \dots N
\end{align}
where ${\cal E}(\boldsymbol{\theta})$ is some standard loss for labeled pixels $p \in \Omega_{\cal L}$, e.g., the
cross entropy\footnote{We give the cross entropy as an example but our framework is not restricted to a specific form of loss for the set of labeled points.}:
${\cal E}(\boldsymbol{\theta}) = -\sum_{p \in \Omega_{\cal L}} \sum_{k} y_p^k \log(s_{p,\boldsymbol{\theta}}^k)$.
Inequality constraints of the general form in \eqref{Constrained-CNN} can embed very useful prior knowledge on the network's predictions for unlabeled
pixels. Assume, for instance, that we have prior knowledge about the size of the target region (i.e., class) $k$. Such knowledge can take the form of lower or upper bounds on region size, which is common in medical image segmentation problems \cite{kervadec2019constrained,Niethammer2013,Gorelick2013}.
In this case, one can impose constraints of the form\n $f_i(S_{\\boldsymbol{\\theta}}) = \\sum_{p \\in \\Omega} s_{p,\\boldsymbol{\\theta}}^k - a$, with $a$ denoting an upper bound on the size of region $k$. The same type of constraints can impose image-tag priors,\n a form of weak supervision enforcing whether a target region is present or absent in a given training image, as in multiple instance learning (MIL)\n scenarios \\cite{pathak2015constrained,kervadec2019constrained}. For instance, constraint of the form $f_i(S_{\\boldsymbol{\\theta}}) = 1- \\sum_{p \\in \\Omega} s_{p,\\boldsymbol{\\theta}}^k$ forces class $k$ to be present in a given training image.\n \n\n\\subsection{Challenges of constrained CNN optimization}\n \nAs pointed out in several recent studies \\cite{pathak2015constrained,Marquez-Neila2017,kervadec2019constrained}, imposing hard constraints on deep CNNs involving millions of trainable parameters is challenging. This is the case of problem \\eqref{Constrained-CNN}, even when the constraints are convex with respect to the outputs of the network. In optimization, a standard way to handle constraints is to solve the Lagrangian primal and dual problems in an alternating scheme \\cite{Boyd2004}. For \\eqref{Constrained-CNN}, this corresponds to alternating the optimization of a CNN for the primal with stochastic optimization, e.g., SGD, and projected gradient-ascent iterates for the dual. However, despite the clear benefits of imposing global constraints on CNNs, such a standard Lagrangian-dual optimization is mostly avoided in modern deep networks. As discussed recently in \\cite{pathak2015constrained, Marquez-Neila2017,Ravi2018}, this might be explained by the computational complexity and stability\/convergence issues caused by alternating between stochastic optimization and dual updates\/projections.\n\nIn standard Lagrangian-dual optimization, an unconstrained problem needs to be solved after each iterative dual step. 
This is not feasible for deep CNNs, however, as it would require re-training the network at each step. To avoid this problem, Pathak et al. \cite{pathak2015constrained} introduced a latent distribution, and minimized a KL divergence so that the CNN output matches this distribution as closely as possible. Since the network's output is not directly coupled with constraints, its parameters can be optimized using standard techniques like SGD. While this strategy enabled adding inequality constraints in weakly supervised segmentation, it is limited to linear constraints. Moreover, the work in \cite{Marquez-Neila2017} imposed hard equality constraints on 3D human pose estimation. To alleviate the ensuing computational complexity, they used a Krylov sub-space approach, limiting the solver to a randomly selected subset of constraints within each iteration. Therefore, constraints that are satisfied at one iteration may not be satisfied at the next, which might explain the negative results obtained in \cite{Marquez-Neila2017}. In general, updating the network parameters and dual variables in an alternating fashion leads to a higher computational complexity than solving a loss function directly.

Another important difficulty in Lagrangian optimization is the interplay between stochastic optimization (e.g., SGD) for the primal and the iterates/projections for the dual. Basic gradient methods have well-known issues with deep networks, e.g., they are sensitive to the learning rate and prone to weak local minima. Therefore, the dual part in Lagrangian optimization might obstruct the practical and theoretical benefits of stochastic optimization (e.g., speed and strong generalization performance), which are widely established for unconstrained deep network losses \cite{Hardt2016}.
More importantly, solving the primal and dual separately may lead to instability during training or slow convergence, as shown recently in \\cite{kervadec2019constrained}.\n\n \\subsection{Penalty approaches}\n In the context of deep networks, ``hard'' inequality or equality constraints are typically handled in a ``soft'' manner by augmenting the loss with a {\\em penalty}\n function \\cite{He2017,Jia2017,kervadec2019constrained}. Such a penalty approach is a simple alternative to Lagrangian optimization, and is well-known in the general context of constrained optimization;\n see \\cite{Bertsekas1995}, Chapter 4. In general, penalty-based methods approximate a constrained minimization problem with an unconstrained one by adding a term (penalty) ${\\cal P}(f_i(S_{\\boldsymbol{\\theta}}))$, which increases when constraint $f_i(S_{\\boldsymbol{\\theta}}) \\leq 0$ is violated. By definition, a penalty ${\\cal P}$ is a non-negative, continuous and differentiable function, which verifies: ${\\cal P}(f_i(S_{\\boldsymbol{\\theta}})) = 0$ if and only if constraint $f_i(S_{\\boldsymbol{\\theta}}) \\leq 0$ is satisfied. In semantic segmentation \\cite{kervadec2019constrained} and, more generally, in deep learning \\cite{He2017}, it is common to use a quadratic penalty for imposing an inequality constraint: ${\\cal P}(f_i(S_{\\boldsymbol{\\theta}})) = [f_i(S_{\\boldsymbol{\\theta}})]_+^2$, where $[x]_+ = \\max (0,x)$ denotes the rectifier function. Fig. \\ref{fig:logBarrier} depicts different penalty functions. Penalties are convenient for deep networks because they remove the requirement for explicit Lagrangian-dual optimization. The inequality constraints are fully handled within stochastic optimization, as\n in standard unconstrained losses, avoiding gradient ascent iterates\/projections over the dual variables and reducing the computational load for training \\cite{kervadec2019constrained}. However, this simplicity of\n penalty methods comes at a price. 
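The quadratic penalty just described can be transcribed in a few lines of plain Python (an illustration, independent of any deep learning framework); note that both the penalty and its derivative vanish identically on the feasible set $z \leq 0$:

```python
def quadratic_penalty(z):
    # P(z) = [z]_+^2 for an inequality constraint f(S_theta) <= 0
    return max(0.0, z) ** 2

def quadratic_penalty_grad(z):
    # dP/dz = 2 [z]_+ : identically zero whenever the constraint is
    # satisfied, so a satisfied constraint exerts no restoring force
    return 2.0 * max(0.0, z)

assert quadratic_penalty(-0.1) == 0.0 and quadratic_penalty_grad(-0.1) == 0.0
assert quadratic_penalty(0.5) == 0.25 and quadratic_penalty_grad(0.5) == 1.0
```

This null value and null gradient on the feasible set are precisely the price paid for the simplicity of penalty methods.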
In fact, it is well known that penalties do not guarantee constraint satisfaction and require careful and {\em ad hoc} tuning of the relative importance (or weight)
of each penalty term in the overall function. More importantly, in the case of several competing constraints, penalties do not act as {\em barriers} at the boundary of the feasible set (i.e., a satisfied constraint yields a null penalty and null gradient). As a result, a subset of constraints that are satisfied at one iteration may not be satisfied at the next. Take the case of two competing constraints $f_1$ and $f_2$ at the current iteration (assuming gradient-descent optimization), and suppose that $f_1$ is satisfied but $f_2$ is not. The gradient of a penalty $\cal P$ w.r.t. the term of satisfied constraint $f_1$ is null, and the penalty approach will focus solely on satisfying $f_2$. Therefore, due to a null gradient, there is nothing that prevents satisfied constraint $f_1$ from being violated. This could lead to oscillations between competing constraints during iterations, making the training unstable (we will give examples in the experiments).

Lagrangian optimization can deal with these difficulties, and has several well-known theoretical
and practical advantages over penalty methods \cite{Fletcher1987,Gill1981}: it automatically finds the optimal weights of the constraints, acts as a barrier for satisfied constraints and guarantees constraint satisfaction when feasible solutions exist.
Unfortunately, as pointed out recently in \cite{Marquez-Neila2017,kervadec2019constrained}, these advantages of Lagrangian optimization do not materialize in practice in the context of deep CNNs.
Apart from the computational-feasibility aspects, which the recent works in \\cite{Marquez-Neila2017,pathak2015constrained} address to some extent with approximations, the performances of Lagrangian optimization are, surprisingly, below those obtained with simple, much less computationally intensive penalties \\cite{Marquez-Neila2017,kervadec2019constrained}. This is, for instance, the case of the recent weakly supervised CNN semantic segmentation results in \\cite{kervadec2019constrained}, which showed that a simple quadratic-penalty formulation of inequality constraints outperforms substantially the Lagrangian method in \\cite{pathak2015constrained}. Also, the authors of \\cite{Marquez-Neila2017} reported surprising results in the context of 3D human pose estimation. In their case, replacing the equality constraints with simple quadratic penalties yielded better results\n than Lagrangian optimization.\n\n \\subsection{Contributions}\n \n Interior-point and log-barrier methods can approximate Lagrangian optimization by starting from a feasible solution and solving unconstrained problems, while completely avoiding explicit dual steps and projections. Unfortunately, despite their well-established advantages over penalties, such standard log-barriers were not used before in deep CNNs because finding a feasible set of initial network parameters is not trivial, and is itself a challenging constrained-CNN problem. We propose {\\em log-barrier extensions}, which approximate Lagrangian optimization of constrained-CNN problems with a sequence of unconstrained losses, without the need for an initial feasible set of network parameters. Furthermore, we provide a new theoretical result, which shows that the proposed extensions yield a duality-gap bound. This generalizes the standard duality-gap result of log-barriers, yielding sub-optimality certificates for feasible solutions in the case of convex losses. 
While sub-optimality is not guaranteed for non-convex problems, our result shows that log-barrier extensions are a principled way to approximate Lagrangian optimization for constrained CNNs via {\\em implicit} dual variables. Our approach addresses the well-known limitations of penalty methods and, at the same time, removes the explicit dual updates of Lagrangian optimization. We report comprehensive experiments showing that our formulation outperforms various penalty-based methods for constrained CNNs, both in terms of accuracy and training stability.\n\n \\section{Background on Lagrangian-dual optimization and the standard log-barrier}\n \\label{sec:log-barrier}\n\n This section reviews both standard Lagrangian-dual optimization and the log-barrier method for constrained problems \\cite{Boyd2004}. We also present basic concepts of duality theory, namely the {\\em duality gap} and $\\epsilon${-suboptimality}, which will be needed when introducing our log-barrier extension and the corresponding duality-gap bound. We also discuss the limitations of standard constrained optimization methods in the context of deep CNNs.\n\n {\\em Lagrangian-dual optimization:} Let us first examine standard Lagrangian optimization for problem \\eqref{Constrained-CNN}:\n \\begin{equation}\n \\label{Lagrangian}\n {\\cal L}(S_{\\boldsymbol{\\theta}}, \\boldsymbol{\\lambda}) = {\\cal E}(\\boldsymbol{\\theta}) + \\sum_{i=1}^{N} \\lambda_i f_i (S_{\\boldsymbol{\\theta}})\n \\end{equation}\n where $\\boldsymbol{\\lambda}=(\\lambda_1, \\dots, \\lambda_N)$ is the dual variable (or Lagrange-multiplier) vector, with $\\lambda_i$ the multiplier associated with\n constraint $f_i (S_{\\boldsymbol{\\theta}}) \\leq 0$. The dual function is the minimum value of Lagrangian \\eqref{Lagrangian} over $\\boldsymbol{\\theta}$: $g(\\boldsymbol{\\lambda}) = \\min_{\\boldsymbol{\\theta}} {\\cal L}(S_{\\boldsymbol{\\theta}}, \\boldsymbol{\\lambda})$. 
A dual feasible $\boldsymbol{\lambda}\geq0$ yields a lower bound on the optimal value of constrained problem \eqref{Constrained-CNN}, which we denote ${\cal E}^*$: $g(\boldsymbol{\lambda}) \leq {\cal E}^*$. This important inequality can be easily verified, even when the problem \eqref{Constrained-CNN} is not convex; see \cite{Boyd2004}, p. 216. It follows that a dual feasible $\boldsymbol{\lambda}$ gives a sub-optimality certificate for a given feasible point $\boldsymbol{\theta}$, without knowing the exact value of ${\cal E}^*$:
${\cal E}(\boldsymbol{\theta}) - {\cal E}^* \leq {\cal E}(\boldsymbol{\theta}) - g(\boldsymbol{\lambda})$. Nonnegative quantity ${\cal E}(\boldsymbol{\theta}) - g(\boldsymbol{\lambda})$ is the duality gap for primal-dual pair $(\boldsymbol{\theta}, \boldsymbol{\lambda})$. If we manage to find a feasible primal-dual pair $(\boldsymbol{\theta}, \boldsymbol{\lambda})$ such that the duality gap is less than or equal to a certain $\epsilon$, then primal feasible $\boldsymbol{\theta}$ is $\epsilon${-suboptimal}.
\begin{Defn}
A primal feasible point $\boldsymbol{\theta}$ is $\epsilon${\em-suboptimal} when it verifies: ${\cal E}(\boldsymbol{\theta}) - {\cal E}^* \leq \epsilon$.
\end{Defn}
This provides a non-heuristic stopping criterion for Lagrangian optimization, which alternates two iterative steps, one primal and one dual, each decreasing
the duality gap until a given accuracy $\epsilon$ is attained\footnote{Strong duality should hold if we want to achieve arbitrarily small tolerance $\epsilon$.
Of course, strong duality does not hold in the case of CNNs as the primal problem is not convex.}. In the context of CNNs \cite{pathak2015constrained}, the primal step minimizes the Lagrangian w.r.t.
$\\boldsymbol{\\theta}$, which corresponds to training a deep network with stochastic optimization, e.g., SGD: $\\argmin_{\\boldsymbol{\\theta}} {\\cal L}(S_{\\boldsymbol{\\theta}}, \\boldsymbol{\\lambda})$. The dual step is a constrained maximization of the dual function\\footnote{Notice that the dual function is always concave as it is the minimum of a family of affine functions, even when the original (or primal) problem is not convex, as is the case for CNNs.} via projected gradient ascent: $\\max_{\\boldsymbol{\\lambda}} g(\\boldsymbol{\\lambda}) \\, \\, \\mbox{s.t} \\, \\, \\boldsymbol{\\lambda}\\geq0$. As mentioned before, direct use of Lagrangian optimization for deep CNNs increases computational complexity and can lead to instability or poor convergence due to the interplay between stochastic optimization for the primal and the iterates\/projections for the dual.\n Our work approximates Lagrangian-dual optimization with a sequence of unconstrained log-barrier-extension losses, in which the dual\n variables are {\\em implicit}, avoiding explicit dual iterates\/projections. Let us first review the standard log-barrier method.\n\n {\\em The standard log-barrier:} The log-barrier method is widely used for inequality-constrained optimization, and belongs to the\n family of {\\em interior-point} techniques \\cite{Boyd2004}. To solve our constrained CNN problem \\eqref{Constrained-CNN} with this\n method, we need to find a strictly feasible set of network parameters $\\boldsymbol{\\theta}$ as a starting point, which can then be used in an\n unconstrained problem via the standard log-barrier function. 
In the general context of optimization, log-barrier methods proceed in two steps.\n The first, often called {\\em phase I} \\cite{Boyd2004}, computes a feasible point by Lagrangian minimization of a constrained problem, which in the\n case of \\eqref{Constrained-CNN} is:\n \\begin{align}\n \\label{Feasible_point_computation}\n &\\min_{x, \\boldsymbol{\\theta}} \\ x \\ \\ \\nonumber \\\\ &\\mathrm{s.t.} \\ f_i(S_{\\boldsymbol{\\theta}}) \\leq x, \\, i=1, \\dots N\n \\end{align}\n For deep CNNs with millions of parameters, Lagrangian optimization of problem \\eqref{Feasible_point_computation} has the same difficulties as with the\n initial constrained problem in \\eqref{Constrained-CNN}. To find a feasible set of network parameters, one needs to alternate CNN training and projected gradient ascent for the dual variables. This might explain why such interior-point methods, despite their substantial impact in optimization \\cite{Boyd2004}, are mostly overlooked in modern deep networks\\footnote{Interior-point methods were investigated for artificial neural networks before the deep learning era \\cite{Trafalis1997}.}, as is generally the case for other Lagrangian-dual optimization methods.\n\n The second step, often referred to as {\\em phase II}, approximates \\eqref{Constrained-CNN} as an unconstrained problem:\n \\begin{equation}\n \\label{log-barrier-problem}\n \\min_{\\boldsymbol{\\theta}} \\, {\\cal E}(\\boldsymbol{\\theta}) + \\sum_{i=1}^{N} \\psi_t \\left ( f_i(S_{\\boldsymbol{\\theta}}) \\right )\n \\end{equation}\n where $\\psi_t$ is the log-barrier function: $\\psi_t (z) = -\\frac{1}{t} \\log (-z)$. When $t \\rightarrow + \\infty$, this convex, continuous and\n twice-differentiable function approaches a hard indicator for the constraints: $H(z) = 0$ if $z \\leq 0$ and $+\\infty$ otherwise; see Fig. \\ref{fig:logBarrier} (a) for an illustration.\n The domain of the function is the set of feasible points. 
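The function $\psi_t$ can be transcribed directly (a plain-Python sketch; the extended-value convention $\psi_t(z)=+\infty$ for infeasible $z \geq 0$ is made explicit):

```python
import math

def psi(z, t):
    # standard log-barrier psi_t(z) = -(1/t) log(-z); finite only on the
    # strictly feasible side z < 0
    if z >= 0.0:
        return float("inf")
    return -math.log(-z) / t

assert psi(-1.0, 5.0) == 0.0             # log(1) = 0
assert math.isinf(psi(0.2, 5.0))         # infeasible points are excluded
assert psi(-1e-6, 5.0) > psi(-0.5, 5.0)  # grows steeply near the boundary
assert abs(psi(-0.5, 1e6)) < 1e-5        # approaches the indicator as t grows
```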
The higher $t$, the better the quality of the approximation. This suggests that a large $t$ yields a good approximation of the initial constrained problem in \eqref{Constrained-CNN}. This is, indeed, confirmed by the following standard duality-gap result for the log-barrier method \cite{Boyd2004}, which shows that optimizing \eqref{log-barrier-problem} yields a solution that is $N/t$-suboptimal.

\begin{Prop}
\label{prop:duality-gap-log-barrier}
Let $\boldsymbol{\theta}^*$ be the feasible solution of unconstrained problem \eqref{log-barrier-problem} and $\boldsymbol{\lambda}^*=(\lambda^*_1, \dots, \lambda^*_N)$, with $\lambda^*_i= - 1/(t f_i(S_{\boldsymbol{\theta}^*}))$. Then, the duality gap associated with primal feasible $\boldsymbol{\theta}^*$ and dual feasible $\boldsymbol{\lambda}^*$ for the initial constrained problem in \eqref{Constrained-CNN} is:
\[{\cal E}(\boldsymbol{\theta}^*) - g(\boldsymbol{\lambda}^*) = N/t \]
\end{Prop}

{\em Proof}: The proof can be found in \cite{Boyd2004}, p. 566. \qed

\begin{figure*}[h!]
\centering
\begin{subfigure}[b]{0.33\textwidth}
\includegraphics[width=\textwidth]{Images/log_barrier}
\caption{Standard log-barrier}
\end{subfigure}
\begin{subfigure}[b]{0.33\textwidth}
\includegraphics[width=\textwidth]{Images/extended_log_barrier}
\caption{Proposed log-barrier extension}
\end{subfigure}
\begin{subfigure}[b]{0.33\textwidth}
\includegraphics[width=\textwidth]{Images/quadra_relu}
\caption{Examples of penalty functions}
\end{subfigure}
\caption{A graphical illustration of the standard log-barrier in (a), the proposed log-barrier extension in (b) and several examples of
penalty functions in (c). The solid curves in colors illustrate several values of $t$ for functions $\psi_t (z)$, $\tilde \psi_t (z)$
and the ReLU penalty given by $f_t(z) = \max(0, tz)$.
The dashed lines depict both barrier and penalty functions when $t \rightarrow + \infty$.}
\label{fig:logBarrier}
\end{figure*}

An important implication that follows immediately from proposition \ref{prop:duality-gap-log-barrier} is that a feasible solution of approximation \eqref{log-barrier-problem} is $N/t$-suboptimal: ${\cal E}(\boldsymbol{\theta}^*) - {\cal E}^* \leq N/t$. This suggests a simple way for solving the initial constrained problem with a guaranteed $\epsilon${-suboptimality}: We simply choose large $t = N/\epsilon$ and solve unconstrained problem \eqref{log-barrier-problem}. However, for large $t$, the log-barrier function is difficult to minimize because its gradient varies rapidly near the boundary of the feasible set. In practice, log-barrier methods solve a sequence of problems of the form \eqref{log-barrier-problem}
with an increasing value $t$. The solution of a problem is used as a starting point for the next, until a specified $\epsilon$-suboptimality is reached.

\section{Log-barrier extensions}
\label{sec:extensions}

We propose the following unconstrained loss for approximating Lagrangian optimization of constrained problem \eqref{Constrained-CNN}:
\begin{equation}
\label{log-barrier-extension-problem}
\min_{\boldsymbol{\theta}} \, {\cal E}(\boldsymbol{\theta}) + \sum_{i=1}^{N} \tilde{\psi}_t \left ( f_i(S_{\boldsymbol{\theta}}) \right )
\end{equation}
where $\tilde{\psi}_t$ is our {\em log-barrier extension}, which is convex, continuous and twice-differentiable:
\begin{equation}
\label{log-barrier extension}
\tilde{\psi}_{t}(z) =
\begin{cases}
-\frac{1}{t} \log (-z) & \text{if } z \leq -\frac{1}{t^2} \\
tz - \frac{1}{t} \log (\frac{1}{t^2}) + \frac{1}{t} & \text{otherwise}
\end{cases}
\end{equation}
Similarly to the standard log-barrier, when $t \rightarrow + \infty$, our extension \eqref{log-barrier extension} can be viewed as a
smooth approximation of hard indicator function $H$; see Fig. \\ref{fig:logBarrier} (b). However, a very important difference is that the domain of our extension $\\tilde{\\psi}_{t}$ is not restricted to feasible points $\\boldsymbol{\\theta}$.\n Therefore, our approximation \\eqref{log-barrier-extension-problem} removes completely the requirement for\n explicit Lagrangian-dual optimization for finding a feasible set of network parameters. In our case, the inequality constraints are fully handled within stochastic optimization, as in standard unconstrained losses, avoiding completely gradient ascent iterates and projections over {\\em explicit} dual variables. As we will see in the experiments, our formulation yields better results in terms of accuracy and stability than the recent penalty constrained CNN method in \\cite{kervadec2019constrained}.\n\n In our approximation in \\eqref{log-barrier-extension-problem}, the Lagrangian dual variables for the initial inequality-constrained problem of \\eqref{Constrained-CNN}\n are {\\em implicit}. We prove the following duality-gap bound, which yields sub-optimality certificates for feasible solutions of our approximation\n in \\eqref{log-barrier-extension-problem}. Our result\\footnote{Our result applies to the general context of convex optimization. 
In deep CNNs, of course, a feasible solution of our approximation may not be unique and is not guaranteed to be a global optimum as ${\cal E}$ and the constraints are not convex.} can be viewed as an extension of the standard result in proposition \ref{prop:duality-gap-log-barrier}, which expresses the duality gap as a function of $t$ for the log-barrier function.
\begin{Prop}
\label{prop:duality-gap-log-barrier-extension}
Let $\boldsymbol{\theta}^*$ be the solution of problem \eqref{log-barrier-extension-problem} and $\boldsymbol{\lambda}^*=(\lambda^*_1, \dots, \lambda^*_N)$ the corresponding vector of
implicit Lagrangian dual variables given by:
\begin{equation}
\label{implicit-dual-barrier-extension-initial}
\lambda^*_i = \begin{cases}
-\frac{1}{t f_i(S_{\boldsymbol{\theta}^*})} & \text{if } f_i(S_{\boldsymbol{\theta}^*}) \leq -\frac{1}{t^2} \\
t & \text{otherwise}
\end{cases} .
\end{equation}
Then, we have the following upper bound on the duality gap associated with primal $\boldsymbol{\theta}^*$ and implicit dual feasible $\boldsymbol{\lambda}^*$ for the initial inequality-constrained problem \eqref{Constrained-CNN}:
\[ {\cal E}(\boldsymbol{\theta}^*) - g(\boldsymbol{\lambda}^*) \leq N/t \]
\end{Prop}
{\em Proof:} We give a detailed proof of Prop. \ref{prop:duality-gap-log-barrier-extension} in the Appendix. \qed

From proposition \ref{prop:duality-gap-log-barrier-extension}, the following important fact follows immediately: If the solution $\boldsymbol{\theta}^*$ that we obtain from
unconstrained problem \eqref{log-barrier-extension-problem} is feasible and global, then it is $N/t$-suboptimal for constrained problem \eqref{Constrained-CNN}:
${\cal E}(\boldsymbol{\theta}^*) - {\cal E}^* \leq N/t$.

Finally, we arrive at our constrained CNN learning algorithm, which is fully based on SGD.
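The extension $\tilde{\psi}_t$ of Eq. \eqref{log-barrier extension} is straightforward to transcribe; the sketch below also checks numerically that the two branches agree in value and slope at the junction $z=-1/t^2$ (the slope there is $t$, matching the implicit dual variable of Eq. \eqref{implicit-dual-barrier-extension-initial}). The value $t=5$ is an arbitrary illustrative choice:

```python
import math

def psi_tilde(z, t):
    # log-barrier extension: standard barrier branch for z <= -1/t^2,
    # linear extrapolation (slope t) beyond it, so infeasible z are allowed
    if z <= -1.0 / t**2:
        return -math.log(-z) / t
    return t * z - math.log(1.0 / t**2) / t + 1.0 / t

t = 5.0
zj = -1.0 / t**2  # junction between the two branches

# value continuity at the junction
barrier = -math.log(-zj) / t
linear = t * zj - math.log(1.0 / t**2) / t + 1.0 / t
assert abs(barrier - linear) < 1e-12

# slope continuity: the barrier derivative -1/(t z) equals t at z = zj,
# which is the slope of the linear branch (finite-difference check)
eps = 1e-7
num_slope = (psi_tilde(zj + eps, t) - psi_tilde(zj - eps, t)) / (2 * eps)
assert abs(num_slope - t) < 1e-3
```

In a training loop, $t$ would then be increased after each epoch, as in Algorithm \ref{algo:logbarrier}.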
Similarly to the standard log-barrier algorithm, we use a varying parameter $t$. We optimize a sequence of losses of the form \\eqref{log-barrier-extension-problem} and increase gradually the value $t$ by a factor $\\mu$. The network parameters obtained for the current $t$ and epoch are used as a starting point for the next $t$ and epoch. Steps of the proposed constrained CNN learning algorithm are detailed in Algorithm \\ref{algo:logbarrier}.\n \n {\\em On the fundamental differences between our log-barrier extensions and penalties:} It is important to note that making the log-barrier extension gradually harder by increasing parameter $t$ is not a crucial difference between our formulation and standard penalties. In fact, we can also make standard penalties stricter with a similar gradual increase of $t$ (we will present experiments on this in the next section). The fundamental differences are: \n \\begin{itemize}\n \\item A penalty does not act as a barrier near the boundary of the feasible set, i.e., a satisfied constraint yields null penalty and gradient. Therefore, at a given gradient update, there is nothing that prevents a satisfied constraint from being violated, causing oscillations between competing constraints and making the training unstable; See Figs. \\ref{fig:learning_curves} (a) and (d) for an illustration. On the contrary, the strictly positive gradient of our log-barrier extension gets higher when a satisfied constraint approaches violation during optimization, pushing it back towards the feasible set.\n \\vspace{0.25cm}\n \\item Another fundamental difference is that the derivatives of our log-barrier extensions yield the implicit dual variables in Eq. \\eqref{implicit-dual-barrier-extension-initial}, with sub-optimality and duality-gap guarantees, which is not the case for penalties. Therefore, our log-barrier extension mimics Lagrangian optimization, but with implicit rather than explicit \n dual variables. The detailed proof of Prop. 
\\ref{prop:duality-gap-log-barrier-extension} in the Appendix clarifies how the $\\lambda^*_i$'s in Eq. \\eqref{implicit-dual-barrier-extension-initial} can be viewed as implicit dual variables. \n \\end{itemize}\n \n \\begin{algorithm}\n \\BlankLine\n \\textbf{Given} initial non-strictly feasible $\\theta, t:=t^{(0)} > 0, \\mu > 1$ \\\\\n \\BlankLine\n \\textbf{Repeat} (for \\textit{n} epochs) \\\\\n \\BlankLine\n \\qquad Compute $\\theta^*(t)$ by minimizing \\eqref{log-barrier-extension-problem} via SGD, starting at $\\theta$ \\\\\n \\qquad Update $\\theta. \\rightarrow \\quad \\theta:=\\theta^*(t)$ \\\\\n \\qquad Increase $t. \\rightarrow \\quad t :=\\mu t$\\\\\n \\BlankLine\n \\caption{Log-barrier-extension training for constrained CNNs}\n \\label{algo:logbarrier}\n \\end{algorithm}\n\n \\section{Experiments}\n \n Both the proposed log-barrier extension and the standard quadratic penalty in \\cite{kervadec2019constrained}, i.e., ${\\cal P}(f_i(S_{\\boldsymbol{\\theta}})) = [f_i(S_{\\boldsymbol{\\theta}})]_+^2$, are compatible with any differentiable function $f_i(S_{\\boldsymbol{\\theta}})$, including non-linear and fractional terms, as in Eqs. \\eqref{eq:soft_size} and \\eqref{eq:soft_centroid} introduced further in the paper. However, we hypothesize that our log-barrier extension is better for handling the interplay between multiple competing constraints. To validate this hypothesis, we compare both strategies on the joint optimization of two segmentation constraints related to region size and centroid. Furthermore, we included comparisons with a ReLU penalty, which is parameterized by a gradually increasing $t$ in exactly the same way as our log-barrier extension: $f_t(z) = \\max(0, tz)$.\n Also, the ReLU penalty has a linear behaviour on the non-feasible set, similarly to our log-barrier extension. 
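The contrast between the three terms on the feasible side can be illustrated numerically. In the sketch below (helper names are ours), the quadratic and ReLU penalty gradients follow the forms given above, while the barrier gradient is the implicit dual variable of Eq. \eqref{implicit-dual-barrier-extension-initial}:

```python
# Derivatives of each term w.r.t. the constraint value z = f_i(S_theta).
# Assumed forms: quadratic penalty [z]_+^2, ReLU penalty max(0, t*z),
# and the C^1 log-barrier extension.
def quad_penalty_grad(z):
    return 2 * max(0.0, z)

def relu_penalty_grad(z, t):
    return t if z > 0 else 0.0

def barrier_ext_grad(z, t):
    # Implicit dual variable: -1/(t*z) on the barrier branch,
    # t on the linear-extension branch.
    return -1.0 / (t * z) if z <= -1.0 / t ** 2 else t

t = 5.0
z = -0.5          # constraint satisfied with some margin
assert quad_penalty_grad(z) == 0.0      # penalty: no restoring force
assert relu_penalty_grad(z, t) == 0.0   # same for the parameterized ReLU
assert barrier_ext_grad(z, t) > 0.0     # barrier: still pushes away

# As a satisfied constraint approaches violation (z -> 0^-), the barrier
# gradient grows, then saturates at t on the linear branch.
assert barrier_ext_grad(-0.05, t) > barrier_ext_grad(-0.5, t)
assert barrier_ext_grad(0.2, t) == t
```

At a satisfied constraint both penalties are flat, so nothing opposes a gradient update that violates it, whereas the barrier term always pushes back.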
Therefore, a comparison with a gradually stricter ReLU on multiple constraints will confirm the importance of the barrier effect of our log-barrier extensions on the feasible set, which we discussed in the previous section. We will not compare directly to the Lagrangian optimization in \\cite{pathak2015constrained} because the recent weakly supervised CNN segmentation results in \\cite{kervadec2019constrained} showed that a quadratic-penalty formulation of inequality constraints outperforms substantially the Lagrangian method in \\cite{pathak2015constrained}, in terms of performance and training stability. Therefore, we focus our comparisons on several penalties. \n\n{\\em Region-size constraint:}\n For a partially-labeled image, we define the size (or volume) of a segmentation for class $k$ as the sum of its softmax predictions over the image domain:\n \\begin{equation}\n \\label{eq:soft_size}\n \\mathcal{V}_{\\boldsymbol{\\theta}}^k = \\sum_{p \\in \\Omega} s^k_{p,\\boldsymbol{\\theta}}\n \\end{equation}\n Notice that we use the softmax predictions to approximate size because using the discrete binary values after thresholding would not be differentiable. In practice, we can make network predictions $s^k_{p,\\boldsymbol{\\theta}}$ close to binary values using a large enough temperature parameter in the softmax function. We use the following inequality constraints on region size:\n $0.9 \\tau_{\\mathcal{V}^k} \\leq \\mathcal{V}_{\\boldsymbol{\\theta}}^k \\leq 1.1 \\tau_{\\mathcal{V}^k}$,\n where, similarly to the experiments in \\cite{kervadec2019constrained}, $\\tau_{\\mathcal{V}^k} = \\sum_{p \\in \\Omega} y_p^k$ is determined from the ground truth of each image\\footnote{Since our focus is on evaluating and comparing constrained-optimization methods, we did not add additional processes to estimate the bounds without complete annotations of each training image. 
One can, however, use a single fully annotated training image to obtain such bounds or use auxiliary learning to estimate region attributes such as size \\cite{Kervadec2019MICCAI}.}. \n\n {\\em Region-centroid constraints:}\n The centroid of the predicted region can be computed as a weighted average of the pixel coordinates:\n \\begin{equation}\n \\label{eq:soft_centroid}\n \\mathcal{C}_{\\boldsymbol{\\theta}}^k = \\frac{\\sum_{p\\in\\Omega} s_{p,\\boldsymbol{\\theta}}^k c_p}{\\sum_{p \\in \\Omega} s_{p,\\boldsymbol{\\theta}}^k},\n \\end{equation}\n where $c_p \\in \\mathbb{N}^2$ are the pixel coordinates on a 2D grid.\n We constrain the position of the centroid to lie in a box around the ground-truth centroid:\n $ \\tau_{\\mathcal{C}^k} - 20 \\leq \\mathcal{C}_{\\boldsymbol{\\theta}}^k \\leq \\tau_{\\mathcal{C}^k} + 20$,\n with $\\tau_{\\mathcal{C}^k} = \\frac{\\sum_{p\\in\\Omega} y_p^k c_p}{\\sum_{p \\in \\Omega} y_p^k}$ corresponding to the bound values associated\n with each image.\n\n \\subsection{Datasets and evaluation metrics}\n \\label{ssec:dataset}\n Our evaluations and comparisons were performed on three different segmentation scenarios using synthetic, medical and color images. The data sets used in each of these problems are detailed below. \n\n \\begin{figure}[ht!]\n \\centering\n \\includegraphics[width=.85\\textwidth]{Images\/prostate_labels}\n \\caption{Full mask of the prostate (\\textit{top}) and the generated point annotations (\\textit{bottom}). The background is depicted in red, and the foreground in green. No color means that no information is provided about the pixel class. The figures are best viewed in colors.}\n \\label{fig:prostate_labels}\n \\end{figure}\n\n \\begin{itemize}\n \\item \\textbf{Synthetic images:}\n We generated a synthetic dataset composed of 1100 images with two different circles of the same size but different colors, and different levels of Gaussian noise added to the whole image. The target region is the darker circle. 
From these images, 1000 were employed for training and 100 for validation; see Fig. \\ref{fig:cherry_toy}, first column, for illustration. No pixel annotation is used during training ($\\Omega_{\\mathcal{L}} = \\emptyset$). The objective of this simple dataset is to compare our log-barrier extension with several penalties\n in three different constraint settings: 1) only size, 2) only centroid, and 3) both constraints. For the first two settings, we expect both methods to fail since the corresponding segmentation problems are under-determined (e.g., size is not sufficient to determine which circle is the correct one). On the other hand, the third setting provides enough information to segment the right circle, and the main challenge here is the interplay between the two different constraints.\n\n \\item \\textbf{Medical images:} We use the PROMISE12 \\cite{litjens2014evaluation} dataset, which was made available for the MICCAI 2012 prostate segmentation challenge. Magnetic Resonance (MR) images (T2-weighted) of 50 patients with various diseases were acquired at different locations with several MRI vendors and scanning protocols. We hold out 10 patients for validation and use the rest for training. As in \\cite{kervadec2019constrained}, we use partial cross entropy for the weakly supervised setting, with weak labels derived from the ground truth by placing random dots inside the object of interest (Fig. \\ref{fig:prostate_labels}).\n \n For this data set, we impose constraints on the size of the target region, as in \\cite{kervadec2019constrained}.\n\n \\item \\textbf{Color images:} We also evaluate our method on the Semantic Boundaries Dataset (SBD), which can be seen as an extended version of the original PascalVOC segmentation benchmark. We employed the 20 semantic categories of PascalVOC. This dataset contains 11318 fully annotated images, divided into 8498 for training and 2820 for testing. 
We obtained the scribble annotations from the public repository of ScribbleSup \\cite{scribblesup}, and took the intersection between both datasets for our experiments. Thus, a total of 8829 images were used for training, and 1449 for validation.\n \\end{itemize}\n\n For the synthetic and PROMISE12 datasets, we resort to the common \\text{Dice index} (DSC) = $\\frac{2|S \\bigcap Y|}{|S|+|Y|}$ to evaluate the performance of tested methods. For PascalVOC, we follow most studies on this dataset and use the mean Intersection over Union (mIoU) metric.\n\n \\begin{figure*}[ht!]\n \n \n\n \n\n \n\n \n \n \n \\centering\n \\begin{subfigure}[b]{0.33\\textwidth}\n \\includegraphics[width=\\textwidth]{Images\/toy_tra_dice}\n \\caption{Synthetic dataset training dice.}\n \\label{fig:toy_training_dice_suplt}\n \\end{subfigure}\n \\begin{subfigure}[b]{0.33\\textwidth}\n \\includegraphics[width=\\textwidth]{Images\/prostate_tra_dice}\n \\caption{PROMISE12 dataset training dice.}\n \\label{fig:prostate_training_dice_suplt}\n \\end{subfigure}\n \\begin{subfigure}[b]{0.33\\textwidth}\n \\includegraphics[width=\\textwidth]{Images\/train_mIoU_plot}\n \\caption{PascalVOC training mIoU.}\n \\label{fig:pascal_training_miou_suplt}\n \\end{subfigure}\n \\\\\n \\begin{subfigure}[b]{0.33\\textwidth}\n \\includegraphics[width=\\textwidth]{Images\/toy_val_dice}\n \\caption{Synthetic dataset validation dice}\n \\label{fig:toy_validation_dice}\n \\end{subfigure}\n \\begin{subfigure}[b]{0.33\\textwidth}\n \\includegraphics[width=\\textwidth]{Images\/prostate_val_batch_dice}\n \\caption{PROMISE12 dataset validation dice}\n \\label{fig:prostate_validation_dice}\n \\end{subfigure}\n \\begin{subfigure}[b]{0.33\\textwidth}\n \\includegraphics[width=\\textwidth]{Images\/val_mIoU_plot}\n \\caption{PascalVOC validation mIoU}\n \\label{fig:pascal_validatio_miou}\n \\end{subfigure}\n\n \\caption{Learning curves on the three data sets, for both the training and validation sets. 
Best viewed in colors.}\n \\label{fig:learning_curves}\n \\end{figure*}\n\n \\subsection{Training and implementation details}\n Since the three datasets have very different characteristics, we considered a specific network architecture and training strategy for each of them.\n\n For the dataset of synthetic images, we used the ENet network \\cite{paszke2016enet}, as it has shown a good trade-off between accuracy and inference time. The network was trained from scratch using the Adam optimizer and a batch size of 1. The initial learning rate was set to $5 \\times 10^{-4}$ and decreased by half if validation performance did not improve for 20 epochs. The softmax temperature was set to 5. To segment the prostate, we used the same settings as in \\cite{kervadec2019constrained}, reporting their results for the penalty-based baselines.\n \n For PascalVOC, we used a PyTorch implementation of the FCN8s model \\cite{FCN}, built upon a pre-trained VGG16 \\cite{simonyan2014very} from the Torchvision model zoo\\footnote{\\url{https:\/\/pytorch.org\/docs\/stable\/torchvision\/models.html}}. We trained this network with a batch size of 1 and a constant learning rate of $10^{-5}$. Regarding the weights of the penalty and log-barrier terms, we investigated several values and obtained the best performance with $10^{-4}$ and $10^{-2}$, respectively.\n\n For all tasks, we set the initial $t$ value of our extended log-barrier (Algorithm \\ref{algo:logbarrier}) to 5. We increased it by a factor of $\\mu=1.1$ after each epoch. This strategy relaxes constraints in the first epochs so that the network can focus on learning from images, and then gradually makes these constraints harder as optimization progresses. 
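As a minimal sketch (using the stated values $t^{(0)}=5$ and $\mu=1.1$), this schedule is a simple geometric growth of $t$:

```python
# Growth of the barrier parameter t under the schedule described above
# (initial t = 5, multiplied by mu = 1.1 after each epoch).
t, mu = 5.0, 1.1
schedule = []
for epoch in range(100):
    schedule.append(t)
    t *= mu

assert schedule[0] == 5.0
# After ~50 epochs the constraints are already two orders of magnitude
# "harder" than at the start.
assert schedule[50] > 500.0
```

Since the duality-gap bound of Prop. \ref{prop:duality-gap-log-barrier-extension} is $N\/t$, this growth also tightens the sub-optimality guarantee as training progresses.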
The same scheduling is used for the ReLU baseline.\n Experiments on the toy example and PascalVOC were implemented in Python 3.7 with PyTorch 1.0.1 \\cite{paszke2017automatic}, whereas\n we followed the same specifications as \\cite{kervadec2019constrained} for the prostate experiments, employing Python 3.6 with PyTorch 1.0. All the experiments were carried out on a server equipped with an NVIDIA Titan V. The code is publicly available\\footnote{\\url{https:\/\/github.com\/LIVIAETS\/extended_logbarrier}}.\n\n \\subsection{Results}\n \\label{sssec:results}\n The following sections report the experimental results and comparisons on the three datasets introduced in Sec. \\ref{ssec:dataset}.\n\n \\subsubsection{Synthetic images}\n Results on the synthetic example for our log-barrier extensions and the penalty-based approaches are reported in Table \\ref{tab:numbers_toy}. As expected, constraining the size only is not sufficient to locate the correct circle (2nd, 5th and 8th columns in Fig. \\ref{fig:cherry_toy}), which explains the very low DSC values in the second column of Table \\ref{tab:numbers_toy}. However, we observe that the different optimization strategies lead to very different solutions, with sparse unconnected dots for the penalty-based methods and a continuous shape for our log-barrier extension. This difference could be due to the high gradients of the penalty method in the first iterations, which strongly bias the network toward bright pixels. Constraining only the centroid locates the target region, but misses the correct boundaries (3rd, 6th and 9th columns in Fig. \\ref{fig:cherry_toy}). Notice that for the centroid constraint, which corresponds to a difficult fractional term, our log-barrier yielded a much better performance than the penalties, with about a $12\\%$ improvement over the quadratic penalty and a $6\\%$ improvement over the parameterized ReLU penalty. \n The most interesting scenario is when both the size and centroid are constrained. 
In Fig. \\ref{fig:toy_validation_dice}, we can see that the penalty-based methods, both quadratic and ReLU with parameter $t$, are unstable during training, and have significantly lower performance than our log-barrier extension; see the last column of Table \\ref{tab:numbers_toy}. This demonstrates the barrier's effectiveness in preventing predictions from going out of bounds (Fig. \\ref{fig:logBarrier}), thereby making optimization more stable. Notice that the gradually harder ReLU shows the same performance and unstable behaviour as the quadratic penalty, which confirms the importance of the barrier effect when dealing with multiple constraints.\n \n \n \n \n\n \\begin{table}[ht!]\n \\centering\n \\footnotesize\n \\begin{tabular}{l|c|c||c}\n \\toprule\n & \\multicolumn{3}{c}{\\textbf{Constraints}} \\\\\n \\hline\n \\textbf{Method} & \\textbf{Size} & \\textbf{Centroid} & \\textbf{Size \\& Centroid} \\\\\n \\hline\n ReLU penalty (w\/ param. $t$) & 0.0087 & 0.3770 & \\textbf{0.8731} \\\\\n Quadratic penalty \\cite{kervadec2019constrained} & 0.0601 & 0.3197 & \\textbf{0.8514} \\\\\n Log-barrier extensions & 0.0018 & 0.4347 & \\textbf{0.9574} \\\\\n \\bottomrule\n \\end{tabular}\n \\caption{Validation Dice on the synthetic example for several optimization methods and constraint settings.}\n \\label{tab:numbers_toy}\n \\end{table}\n\n \\begin{figure*}[h!]\n \n \\centering\n \\includegraphics[width=\\textwidth]{Images\/toy_cherrypick}\n \\caption{Results of constrained CNNs on a synthetic example using the penalty-based methods and our log-barrier extension. The background is depicted in red, and foreground in green. Best viewed in colors.}\n \\label{fig:cherry_toy}\n \\end{figure*}\n\n \\subsubsection{PROMISE12 dataset}\n Quantitative results on the prostate segmentation task are reported in Table \\ref{tab:numbers_prostate_pascal} (\\textit{left} column). 
Without prior information, i.e., using only the scribbles, the trained model completely fails to achieve a satisfactory performance, with a mean Dice coefficient of 0.032. It can be observed that integrating the target size during training significantly improves performance. While constraining the predicted segmentation with a penalty-based method \\cite{kervadec2019constrained} achieves a DSC value of nearly 0.83, imposing the constraints with our log-barrier extension increases the performance by an additional 2\\%. The use of log-barrier extensions to constrain the CNN predictions reduces the gap towards the fully supervised model, with only a 4\\% difference between the two.\n\n \n \n\n \\begin{table}[ht!]\n \\footnotesize\n \\centering\n \\begin{tabular}{l|c|c}\n \\toprule\n & \\multicolumn{2}{c}{\\textbf{Dataset}} \\\\\n \\hline\n \\textbf{Method} & PROMISE12 (DSC) & VOC2012 (mIoU) \\\\\n \\hline\\hline\n Partial cross-entropy & 0.032 (0.015) & 48.48 (14.88) \\\\\n \\quad w\/ quadratic penalty \\cite{kervadec2019constrained} & 0.830 (0.057) & 52.22 (14.94) \\\\\n \\quad w\/ extended log-barrier & \\textbf{0.852} (0.038) & \\textbf{53.40} (14.62) \\\\\n \\hline\n Full supervision & 0.891 (0.032) & 59.87 (16.94) \\\\\n \\bottomrule\n \\end{tabular}\n \\caption{Mean and standard deviation on the validation set of the PROMISE12 and PascalVOC datasets when networks are trained with several levels of supervision.}\n \\label{tab:numbers_prostate_pascal}\n \\end{table}\n\n \\begin{figure}[ht!]\n \\centering\n \\includegraphics[width=1\\textwidth]{Images\/prostate_cherrypick}\n \\caption{Results on the PROMISE12 dataset. Images are cropped for visualization purposes. The background is depicted in red, and foreground in green. 
The figures are best viewed in colors.}\n \\label{fig:cherry_prostate}\n \\end{figure}\n\n \\subsubsection{PascalVOC}\n Table \\ref{tab:numbers_prostate_pascal} (\\textit{right} column) compares the results of our log-barrier extension to those obtained with\n quadratic penalties and partial cross-entropy (i.e., using scribble annotations only), using region-size constraints. \n For reference, we also include the full-supervision results, which serve as an upper bound. From the results, we can see that the quadratic-penalty constraints improve the performance over learning from scribble annotations alone by approximately 4\\%, in terms of mIoU. With the proposed log-barrier extension, the mIoU increases up to 53.4\\%, yielding a 1.2\\% improvement over the quadratic penalty and leaving only a 6.4\\% gap with respect to full supervision. The visual results in Fig. \\ref{fig:pascal_cherry} show how the proposed framework for constraining CNN training helps reduce over-segmentation and false positives.\n\n \\begin{figure*}[ht!]\n \n \n \\centering\n \\begin{subfigure}[b]{0.158\\textwidth}\n \\includegraphics[width=\\textwidth]{Images\/pascal_cherry\/example_3\/img_2007_005331}\n \\end{subfigure}\n \\begin{subfigure}[b]{0.158\\textwidth}\n \\includegraphics[width=\\textwidth]{Images\/pascal_cherry\/example_3\/gt_2007_005331}\n \\end{subfigure}\n \\begin{subfigure}[b]{0.158\\textwidth}\n \\includegraphics[width=\\textwidth]{Images\/pascal_cherry\/example_3\/partial_2007_005331}\n \\end{subfigure}\n \\begin{subfigure}[b]{0.158\\textwidth}\n \\includegraphics[width=\\textwidth]{Images\/pascal_cherry\/example_3\/penalty_2007_005331}\n \\end{subfigure}\n \\begin{subfigure}[b]{0.158\\textwidth}\n \\includegraphics[width=\\textwidth]{Images\/pascal_cherry\/example_3\/log_barrier_2007_005331}\n \\end{subfigure}\n \\begin{subfigure}[b]{0.158\\textwidth}\n \\includegraphics[width=\\textwidth]{Images\/pascal_cherry\/example_3\/fs_2007_005331}\n \\end{subfigure}\n\n 
\\begin{subfigure}[b]{0.158\\textwidth}\n \\includegraphics[width=\\textwidth]{Images\/pascal_cherry\/example_2\/img_2011_000479}\n \n \\end{subfigure}\n \\begin{subfigure}[b]{0.158\\textwidth}\n \\includegraphics[width=\\textwidth]{Images\/pascal_cherry\/example_2\/gt_2011_000479}\n \n \\end{subfigure}\n \\begin{subfigure}[b]{0.158\\textwidth}\n \\includegraphics[width=\\textwidth]{Images\/pascal_cherry\/example_2\/partial_2011_000479}\n \n \\end{subfigure}\n \\begin{subfigure}[b]{0.158\\textwidth}\n \\includegraphics[width=\\textwidth]{Images\/pascal_cherry\/example_2\/penalty_2011_000479}\n \n \\end{subfigure}\n \\begin{subfigure}[b]{0.158\\textwidth}\n \\includegraphics[width=\\textwidth]{Images\/pascal_cherry\/example_2\/log_barrier_2011_000479}\n \n \\end{subfigure}\n \\begin{subfigure}[b]{0.158\\textwidth}\n \\includegraphics[width=\\textwidth]{Images\/pascal_cherry\/example_2\/fs_2011_000479}\n \n \\end{subfigure}\n\n \\begin{subfigure}[b]{0.158\\textwidth}\n \\includegraphics[width=\\textwidth]{Images\/pascal_cherry\/2007_000661\/JPEGImages}\n \\end{subfigure}\n \\begin{subfigure}[b]{0.158\\textwidth}\n \\includegraphics[width=\\textwidth]{Images\/pascal_cherry\/2007_000661\/SegmentationClass}\n \\end{subfigure}\n \\begin{subfigure}[b]{0.158\\textwidth}\n \\includegraphics[width=\\textwidth]{Images\/pascal_cherry\/2007_000661\/PARTIAL}\n \\end{subfigure}\n \\begin{subfigure}[b]{0.158\\textwidth}\n \\includegraphics[width=\\textwidth]{Images\/pascal_cherry\/2007_000661\/PENALTY}\n \\end{subfigure}\n \\begin{subfigure}[b]{0.158\\textwidth}\n \\includegraphics[width=\\textwidth]{Images\/pascal_cherry\/2007_000661\/LOG_BARRIER}\n \\end{subfigure}\n \\begin{subfigure}[b]{0.158\\textwidth}\n \\includegraphics[width=\\textwidth]{Images\/pascal_cherry\/2007_000661\/FS}\n \\end{subfigure}\n\n \\begin{subfigure}[b]{0.158\\textwidth}\n \\includegraphics[width=\\textwidth]{Images\/pascal_cherry\/2007_001457\/JPEGImages}\n \\end{subfigure}\n 
\\begin{subfigure}[b]{0.158\\textwidth}\n \\includegraphics[width=\\textwidth]{Images\/pascal_cherry\/2007_001457\/SegmentationClass}\n \\end{subfigure}\n \\begin{subfigure}[b]{0.158\\textwidth}\n \\includegraphics[width=\\textwidth]{Images\/pascal_cherry\/2007_001457\/PARTIAL}\n \\end{subfigure}\n \\begin{subfigure}[b]{0.158\\textwidth}\n \\includegraphics[width=\\textwidth]{Images\/pascal_cherry\/2007_001457\/PENALTY}\n \\end{subfigure}\n \\begin{subfigure}[b]{0.158\\textwidth}\n \\includegraphics[width=\\textwidth]{Images\/pascal_cherry\/2007_001457\/LOG_BARRIER}\n \\end{subfigure}\n \\begin{subfigure}[b]{0.158\\textwidth}\n \\includegraphics[width=\\textwidth]{Images\/pascal_cherry\/2007_001457\/FS}\n \\end{subfigure}\n\n \\begin{subfigure}[b]{0.158\\textwidth}\n \\includegraphics[width=\\textwidth]{Images\/pascal_cherry\/2007_001526\/JPEGImages}\n \\end{subfigure}\n \\begin{subfigure}[b]{0.158\\textwidth}\n \\includegraphics[width=\\textwidth]{Images\/pascal_cherry\/2007_001526\/SegmentationClass}\n \\end{subfigure}\n \\begin{subfigure}[b]{0.158\\textwidth}\n \\includegraphics[width=\\textwidth]{Images\/pascal_cherry\/2007_001526\/PARTIAL}\n \\end{subfigure}\n \\begin{subfigure}[b]{0.158\\textwidth}\n \\includegraphics[width=\\textwidth]{Images\/pascal_cherry\/2007_001526\/PENALTY}\n \\end{subfigure}\n \\begin{subfigure}[b]{0.158\\textwidth}\n \\includegraphics[width=\\textwidth]{Images\/pascal_cherry\/2007_001526\/LOG_BARRIER}\n \\end{subfigure}\n \\begin{subfigure}[b]{0.158\\textwidth}\n \\includegraphics[width=\\textwidth]{Images\/pascal_cherry\/2007_001526\/FS}\n \\end{subfigure}\n\n \\begin{subfigure}[b]{0.158\\textwidth}\n \\includegraphics[width=\\textwidth]{Images\/pascal_cherry\/2007_003101\/JPEGImages}\n \\end{subfigure}\n \\begin{subfigure}[b]{0.158\\textwidth}\n \\includegraphics[width=\\textwidth]{Images\/pascal_cherry\/2007_003101\/SegmentationClass}\n \\end{subfigure}\n \\begin{subfigure}[b]{0.158\\textwidth}\n 
\\includegraphics[width=\\textwidth]{Images\/pascal_cherry\/2007_003101\/PARTIAL}\n \\end{subfigure}\n \\begin{subfigure}[b]{0.158\\textwidth}\n \\includegraphics[width=\\textwidth]{Images\/pascal_cherry\/2007_003101\/PENALTY}\n \\end{subfigure}\n \\begin{subfigure}[b]{0.158\\textwidth}\n \\includegraphics[width=\\textwidth]{Images\/pascal_cherry\/2007_003101\/LOG_BARRIER}\n \\end{subfigure}\n \\begin{subfigure}[b]{0.158\\textwidth}\n \\includegraphics[width=\\textwidth]{Images\/pascal_cherry\/2007_003101\/FS}\n \\end{subfigure}\n\n \\begin{subfigure}[b]{0.158\\textwidth}\n \\includegraphics[width=\\textwidth]{Images\/pascal_cherry\/2007_002618\/JPEGImages}\n \\end{subfigure}\n \\begin{subfigure}[b]{0.158\\textwidth}\n \\includegraphics[width=\\textwidth]{Images\/pascal_cherry\/2007_002618\/SegmentationClass}\n \\end{subfigure}\n \\begin{subfigure}[b]{0.158\\textwidth}\n \\includegraphics[width=\\textwidth]{Images\/pascal_cherry\/2007_002618\/PARTIAL}\n \\end{subfigure}\n \\begin{subfigure}[b]{0.158\\textwidth}\n \\includegraphics[width=\\textwidth]{Images\/pascal_cherry\/2007_002618\/PENALTY}\n \\end{subfigure}\n \\begin{subfigure}[b]{0.158\\textwidth}\n \\includegraphics[width=\\textwidth]{Images\/pascal_cherry\/2007_002618\/LOG_BARRIER}\n \\end{subfigure}\n \\begin{subfigure}[b]{0.158\\textwidth}\n \\includegraphics[width=\\textwidth]{Images\/pascal_cherry\/2007_002618\/FS}\n \\end{subfigure}\n\n \\begin{subfigure}[b]{0.158\\textwidth}\n \\includegraphics[width=\\textwidth]{Images\/pascal_cherry\/example_2\/img_2011_000479}\n \\caption*{{\\scriptsize Input\\\\image}}\n \\end{subfigure}\n \\begin{subfigure}[b]{0.158\\textwidth}\n \\includegraphics[width=\\textwidth]{Images\/pascal_cherry\/example_2\/gt_2011_000479}\n \\caption*{{\\scriptsize GT\\\\~}}\n \\end{subfigure}\n \\begin{subfigure}[b]{0.158\\textwidth}\n \\includegraphics[width=\\textwidth]{Images\/pascal_cherry\/example_2\/partial_2011_000479}\n \\caption*{{\\scriptsize Scribbles\\\\only}}\n 
\\end{subfigure}\n \\begin{subfigure}[b]{0.158\\textwidth}\n \\includegraphics[width=\\textwidth]{Images\/pascal_cherry\/example_2\/penalty_2011_000479}\n \\caption*{{\\scriptsize w\/\\\\penalty}}\n \\end{subfigure}\n \\begin{subfigure}[b]{0.158\\textwidth}\n \\includegraphics[width=\\textwidth]{Images\/pascal_cherry\/example_2\/log_barrier_2011_000479}\n \\caption*{{\\scriptsize w\/ ext.\\\\log-barrier}}\n \\end{subfigure}\n \\begin{subfigure}[b]{0.158\\textwidth}\n \\includegraphics[width=\\textwidth]{Images\/pascal_cherry\/example_2\/fs_2011_000479}\n \\caption*{{\\scriptsize Full\\\\supervision}}\n \\end{subfigure}\n\n \\caption{Several visual examples on the PascalVOC validation set. Best viewed in colors.}\n \\label{fig:pascal_cherry}\n \\end{figure*}\n\n\n\n \\section{Conclusion}\n \n \n \n \n \n \n \n We proposed log-barrier extensions, which approximate Lagrangian optimization of constrained-CNN problems with a sequence of unconstrained losses. Our formulation relaxes the need for an initial feasible solution, unlike standard interior-point and log-barrier methods. This makes it convenient for deep networks. We also provided an upper bound on the duality gap for our proposed extensions, thereby generalizing the duality-gap result of standard log-barriers and showing that our formulation involves implicit dual variables that mimic Lagrangian optimization (without dual projections\/steps). \n Therefore, our implicit Lagrangian formulation can be fully handled with SGD, the workhorse of deep networks. \n We reported comprehensive constrained-CNN experiments, showing that log-barrier extensions outperform several types of penalties, in terms of accuracy and training stability.\n \n While we evaluated our approach in the context of weakly supervised segmentation, log-barrier extensions can be useful in a breadth of problems in vision and learning, where constraints occur naturally. 
This includes, for instance, adversarial robustness \\cite{Rony2019CVPR}, stabilizing the training of GANs \\cite{GulrajaniNIPS17}, domain adaptation for segmentation \\cite{curriculumDA2019}, pose-constrained image generation \\cite{poseconstraintsneurips2018}, \n 3D human pose estimation \\cite{Marquez-Neila2017}, and deep reinforcement learning \\cite{He2017}. To our knowledge, for these problems, among others in the context of deep networks, constraints (either equality\\footnote{Note that our framework can also be used for equality constraints as one can transform an equality constraint into two inequality constraints.} or inequality) are typically handled with basic penalties. Therefore, it will be interesting to investigate log-barrier extensions for these problems. \n \n Since our focus was on evaluating and comparing constrained-optimization methods, we defined the constraints from prior knowledge about a few segmentation-region attributes (size and centroid). Such image-level attributes can also be learned from data using auxiliary regression networks, which could be useful in semi-supervision \\cite{Kervadec2019MICCAI} and domain-adaptation \\cite{curriculumDA2019} scenarios. It would be interesting to investigate log-barrier extensions in such scenarios and with a much broader set of constraints, for instance, region connectivity or compactness, inter-region relationships and higher-order shape moments\\footnote{Size and centroid are $0^{\\mbox{th}}$ and $1^{\\mbox{st}}$ shape moments.}.\n\n \\appendices\n \\section{Proof of Proposition \\ref{prop:duality-gap-log-barrier-extension}}\n In this section, we provide a detailed proof for the duality-gap bound in Prop. 
\\ref{prop:duality-gap-log-barrier-extension}.\n Recall our unconstrained approximation for inequality-constrained CNNs:\n \\begin{equation}\n \\label{log-barrier-extension-problem-supp}\n \\min_{\\boldsymbol{\\theta}} \\, {\\cal E}(\\boldsymbol{\\theta}) + \\sum_{i=1}^{N} \\tilde{\\psi}_t \\left ( f_i(S_{\\boldsymbol{\\theta}}) \\right )\n \\end{equation}\n where $\\tilde{\\psi}_t$ is our log-barrier extension, with $t$ strictly positive. Let $\\boldsymbol{\\theta}^*$ be the solution of problem \\eqref{log-barrier-extension-problem-supp}\n and $\\lll^*=(\\lambda^*_1, \\dots, \\lambda^*_N)$ the corresponding vector of implicit dual variables given by:\n \\begin{equation}\n \\label{implicit-dual-barrier-extension}\n \\lambda^*_i = \\begin{cases}\n -\\frac{1}{t f_i(S_{\\boldsymbol{\\theta}^*})} & \\text{if } f_i(S_{\\boldsymbol{\\theta}^*}) \\leq -\\frac{1}{t^2} \\\\\n t & \\text{otherwise}\n \\end{cases}\n \\end{equation}\n We assume that $\\boldsymbol{\\theta}^*$ approximately satisfies\\footnote{When optimizing an unconstrained loss via stochastic gradient descent (SGD), there is no guarantee that the obtained solution satisfies exactly the optimality conditions.} the optimality condition for a minimum of \\eqref{log-barrier-extension-problem-supp}:\n \\begin{equation}\n \\label{optimality-condition-supp}\n \\nabla {\\cal E}(\\boldsymbol{\\theta}^*) + \\sum_{i=1}^{N} \\tilde{\\psi'}_t \\left ( f_i(S_{\\boldsymbol{\\theta}^*}) \\right ) \\nabla f_i(S_{\\boldsymbol{\\theta}^*}) \\approx 0\n \\end{equation}\n It is easy to verify that each dual variable $\\lambda^*_i$ corresponds to the derivative of the log-barrier extension at $f_i(S_{\\boldsymbol{\\theta}^*})$: \n \\begin{equation}\n \\lambda^*_i = \\tilde{\\psi'}_t \\left ( f_i(S_{\\boldsymbol{\\theta}^*}) \\right ) \\nonumber \n \\end{equation}\n Therefore, Eq. 
\\eqref{optimality-condition-supp} means that\n $\\boldsymbol{\\theta}^*$ approximately satisfies the optimality condition for the Lagrangian corresponding to the original inequality-constrained problem in Eq. \\eqref{Constrained-CNN} when $\\lll = \\lll^*$:\n \\begin{equation}\n \\nabla {\\cal E}(\\boldsymbol{\\theta}^*) + \\sum_{i=1}^{N} \\lambda^*_i \\nabla f_i(S_{\\boldsymbol{\\theta}^*}) \\approx 0\n \\end{equation}\n It is also easy to check that the implicit dual variables defined in \\eqref{implicit-dual-barrier-extension} correspond to a feasible dual, i.e., $\\lll^*>0$\n element-wise. Therefore, the dual function evaluated at $\\lll^*$ is:\n \\[ g(\\lll^*) = {\\cal E}(\\boldsymbol{\\theta}^*) + \\sum_{i=1}^{N} \\lambda_i^* f_i(S_{\\boldsymbol{\\theta}^*}),\\]\n which yields the duality gap associated with primal-dual pair $(\\boldsymbol{\\theta}^*, \\lll^*)$:\n \\begin{equation}\n \\label{duality-gap-supp}\n {\\cal E}(\\boldsymbol{\\theta}^*) - g(\\lll^*) = - \\sum_{i=1}^{N} \\lambda_i^* f_i(S_{\\boldsymbol{\\theta}^*})\n \\end{equation}\n Now, to prove that this duality gap is upper-bounded by $N\/t$, we consider three cases for each term in the sum in \\eqref{duality-gap-supp} and verify that, for all the cases, we have $\\lambda_i^* f_i(S_{\\boldsymbol{\\theta}^*}) \\geq -\\frac{1}{t}$.\n \\begin{itemize}\n \\item $f_i(S_{\\boldsymbol{\\theta}^*}) \\leq -\\frac{1}{t^2}$: In this case, we can verify that $\\lambda_i^* f_i(S_{\\boldsymbol{\\theta}^*}) = -\\frac{1}{t}$ using the first line of \\eqref{implicit-dual-barrier-extension}.\n\n \\item $-\\frac{1}{t^2} \\leq f_i(S_{\\boldsymbol{\\theta}^*}) \\leq 0$: In this case, we have $\\lambda_i^* f_i(S_{\\boldsymbol{\\theta}^*}) = t f_i(S_{\\boldsymbol{\\theta}^*})$ from the second line of \\eqref{implicit-dual-barrier-extension}. 
As $t$ is strictly positive and $f_i(S_{\\boldsymbol{\\theta}^*}) \\geq -\\frac{1}{t^2}$, we have $t f_i(S_{\\boldsymbol{\\theta}^*}) \\geq -\\frac{1}{t}$, which means $\\lambda_i^* f_i(S_{\\boldsymbol{\\theta}^*}) \\geq -\\frac{1}{t}$.\n\n \\item $ f_i(S_{\\boldsymbol{\\theta}^*}) \\geq 0$: In this case, $\\lambda_i^* f_i(S_{\\boldsymbol{\\theta}^*}) = t f_i(S_{\\boldsymbol{\\theta}^*}) \\geq 0 > -\\frac{1}{t}$ because $t$ is strictly positive.\n \\end{itemize}\n\n In all three cases, we have $\\lambda_i^* f_i(S_{\\boldsymbol{\\theta}^*}) \\geq -\\frac{1}{t}$. Summing this inequality over $i$ gives $- \\sum_{i=1}^N \\lambda_i^* f_i(S_{\\boldsymbol{\\theta}^*}) \\leq \\frac{N}{t}$. Using this inequality in \\eqref{duality-gap-supp} yields the following upper bound on the duality gap associated with the primal $\\boldsymbol{\\theta}^*$ and the implicit dual feasible $\\lll^*$ for the original inequality-constrained problem:\n \\[{\\cal E}(\\boldsymbol{\\theta}^*) - g(\\lll^*) \\leq N\/t\\]\n \\qed\n\n This bound yields sub-optimality certificates for feasible solutions of our approximation in \\eqref{log-barrier-extension-problem-supp}. If the solution $\\boldsymbol{\\theta}^*$ that we obtain from our unconstrained problem \\eqref{log-barrier-extension-problem-supp} is feasible, i.e., it satisfies the\n constraints $f_i(S_{\\boldsymbol{\\theta}^*}) \\leq 0$, $i=1, \\dots, N$, then $\\boldsymbol{\\theta}^*$ is $N\/t$-suboptimal for the original inequality-constrained problem: ${\\cal E}(\\boldsymbol{\\theta}^*) - {\\cal E}^* \\leq N\/t$. Our upper-bound result can be viewed as a generalization of the duality-gap equality for the standard log-barrier function \\cite{Boyd2004}. Our result applies to the general context of convex optimization.
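The case analysis in this proof is easy to check numerically. The short Python sketch below (the value of $t$ and the sample constraint values are arbitrary choices for illustration) implements the implicit dual variables exactly as defined above and verifies dual feasibility, the per-term bound $\lambda_i^* f_i \geq -1\/t$, and the resulting $N\/t$ bound on the duality gap:

```python
def implicit_dual(f_i, t):
    # implicit dual variable: derivative of the log-barrier extension at f_i
    return -1.0 / (t * f_i) if f_i <= -1.0 / t**2 else t

t = 10.0
fs = [-5.0, -1.0 / t**2, -0.005, 0.0, 0.3]   # sample constraint values f_i
lams = [implicit_dual(f, t) for f in fs]

assert all(lam > 0 for lam in lams)                  # dual feasibility
assert all(lam * f >= -1.0 / t - 1e-12 for lam, f in zip(lams, fs))
gap = -sum(lam * f for lam, f in zip(lams, fs))      # duality gap
assert gap <= len(fs) / t                            # upper bound N/t
```

Note that strictly violated constraints (here $f_i = 0.3$) only make the gap smaller; the $N\/t$ bound holds in every case.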
In deep CNNs, of course, a feasible solution for our approximation may not be unique and is not guaranteed to be a global optimum as ${\\cal E}$ and the constraints are not convex.\n\n \n \\ifCLASSOPTIONcompsoc\n \n \\section*{Acknowledgments}\n \\else\n \n \\section*{Acknowledgment}\n \\fi\n\n This work is supported by the National Science and Engineering Research Council of\nCanada (NSERC), via its Discovery Grant program. \n\n \n \n \\ifCLASSOPTIONcaptionsoff\n \\newpage\n \\fi\n\n \n \n \n \n \n \\bibliographystyle{ieee}\n ","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\nResponses of polyelectrolytes (PEs) to the changes in ionic environment and chain stiffness have been extensively studied in polymer sciences \\cite{OosawaBook,SkolnickMacro77,Barrat93EL,ha1995macromolecules,schiessel1999macromolecules}. \nHowever, new discoveries on the properties of PE are still being made through studies on biopolymers \\cite{Caliskan05PRL,moghaddam09JMB,liu2016BJ,emanuel2009PhysBiol}. \nAlso under active investigation are the effects of other variables and constraints on the higher order organization and dynamics of PE found in biological systems \\cite{needleman2004PNAS,hud2005ARBBS,2015Hyeon198102}.\n\nDemonstrated in both experiments and computational studies \\cite{2000Williams106,2006Hud8174,2001Stevens130,2001Lee3446,2005Muthukumar074905,2013Bachmann028103,2014Maritan064902}, \neven the conformational adaptation of a single PE chain can be highly complex. \nWhereas flexible PE chains form compact globules in the presence of counterions \\cite{schiessel1999macromolecules}, \nthe same condition drives semiflexible PE chains (e.g., dsDNA) to toroidal conformations or metastable rod-like bundles comprised of varying number of racquet structures. 
\nGeometrical constraints such as confinement \\cite{morrison2009PRE,spakowitz2003PRL} and increasing density of PE could add another level of complexity to the system.\nFor example, a DNA chain in a viral capsid or nuclear envelope adopts a densely packed structure, with the swelling due to volume exclusion suppressed by the confinement and counterions \\cite{2001Kenneth14925,2013Leforestier201,Berndsen14PNAS,2015Hyeon198102}. \nFurther, statistically distinct conformations of DNA emerge depending on the amount and type of counterions being added \\cite{2015Nordenskiold8512,yoo2016NatComm}.\n\n\nThe PE brush \\cite{1991Pincus2912,1994Zhulina3249}, \na spatial organization with one end of many PE chains densely grafted to a 2D surface, is another system of interest. \nIn particular, the novel functions and adaptability discovered in biopolymer brushes \\cite{2007Israelachvili1693} deserve much attention. \nFor example, the brush layer of hyaluronic acid, a negatively charged flexible polysaccharide serving as a main component of the extracellular matrix \\cite{2012Richter1466}, modulates the interaction between cells and their environment \\cite{2004Addadi1393}. \nThe brush of Ki-67, a disordered protein bearing a large positive net charge, found on the surface of mitotic chromosomes, prevents aggregation of chromosomes \\cite{2016Daniel308}. \n\n\n\nThe morphology of a polymer brush condensed in a poor solvent has been studied with theories and simulations for decades \n\\cite{1992Binder586,1993Murat3108,1993Williams1313,1998Zhulina1175,2005Pereira214908,2009Dobrynin13158,2010Szleifer5300,2014Sommer104911,2016Terentjev1894}. \nDepending on the chain stiffness, brush condensates adopt diverse morphological patterns that vary from semi-spherical octopus-like micelle domains to cylindrical bundles of rigid chains that protrude from the grafting surface.
\nIt was shown that when the grafting density is in a proper range, multivalent counterions can collapse a flexible PE brush even in \\emph{good} solvent into octopus-like surface micelles displaying substantial lateral heterogeneity \\cite{2016Tirrell284,2017Hyeon1579,2017dePablo155}, which has recently been confirmed experimentally for polystyrene sulfonate brush condensates in \\ce{Y(NO_3)_3} solution \\cite{2017Tirrell1497}.\n\nHowever, we note that the aforementioned studies on the formation of surface micelles from ion-induced collapse of PE brushes are still at odds with the findings by Bracha \\emph{et al.} \\cite{2014BarZiv4945}, which reported fractal-like dendrite domains as a result of multivalent counterion (\\ce{Spd^3+})-induced collapse of a DNA brush.\nAlthough the previous studies on flexible PE brushes \\cite{2016Tirrell284,2017Hyeon1579,2017dePablo155} captured a number of essential features reported by Bracha \\emph{et al.} \\cite{2014BarZiv4945}, \na qualitative difference in the morphology of the brush condensate still exists, thus requiring further investigation.\nTo our knowledge, \nPE brush condensates with dendritic morphology remain unexplored both in theory and computation.\nTo this end, we extended our former work \\cite{2017Hyeon1579} to scrutinize the effect of semiflexibility of the PE chains on the brush morphology and dynamics in trivalent counterion solution.\n\n\\begin{figure*}[t]\n\\centering\\includegraphics[width=0.6\\linewidth]{cfg_v2.pdf}\n\\caption{\nModel and morphologies of the brush condensates at varying chain stiffness. \n(A) The polyelectrolyte brush was modeled by $16\\times16$ polymer chains, each carrying $N = 80$ negatively charged monomers, \ngrafted in a triangular lattice of spacing $d=16 a$ on the surface at $z=0$. \nIn the presence of trivalent cations (blue dots), \na pre-equilibrated brush forms multiple bundles, and eventually fully collapses onto the surface.
\nFor the sake of visual clarity, monovalent cations released from the chains, as well as monovalent anions added with trivalent cations, are not shown. \nIndividual chains are color-coded from blue to red based on their end-to-end distance $R_{ee}$.\n(B) Brush height $H$ and apparent persistence length $l_{p}$ of chains in the brush (see {\\bf Model and Methods}) normalized by $L(=Na)$ at different $\\kappa$ (main panel). \nSix snapshots of brush condensates obtained from simulations performed with different $\\kappa$ are depicted in the smaller panels.\n}\n\\label{cfg}\n\\end{figure*} \n\nIn this study, we adapted a well-tested coarse-grained model of a strong polyelectrolyte (PE) brush \\cite{2000Seidel2728,2003Stevens3855,2014Hsiao2900,2017Hyeon1579}. \nAs shown in Fig.~\\ref{cfg}A, a total of $M (= 16\\times16)$ PE chains were grafted to a triangular lattice on an uncharged surface. \nEach PE chain consists of $N (= 80)$ negatively charged monomers and a neutral terminal monomer grafted to the surface.\nThe lattice spacing was selected to ensure the lateral overlap between neighboring chains. \nThe rigidity of PE chains was adjusted by varying the bending rigidity parameter $\\kappa$.\nWe added trivalent salts to the pre-equilibrated, salt-free PE brush and induced its collapse into a brush condensate. \nDetails of the model and simulation methods are given in {\\bf Model and Methods}.\nThe results of this work are organized such that we first address the overall morphology of the brush condensate under a 1:3 stoichiometric condition of trivalent counterions with respect to a monovalent charge on each monomer. \nNext, the local structure of brush chains is characterized by exploiting the liquid crystal order parameters. \nFinally, we investigate the dynamics of brush condensates and of condensed counterions at varying $\\kappa$ by calculating the intermediate scattering function. \n\n\n\n\n\\section{Results}\n{\\bf Morphology of brush condensates.
}\nRegardless of the value of $\\kappa$, the PE brush fully collapses onto the grafting surface due to the osmotic pressure of ions, which differs from neutral semiflexible polymer brushes or salt-free PE brushes in a poor solvent, where the aggregated bundles protrude out of the grafting surface \\cite{2013Zippelius042601,2009Dobrynin13158,2014Sommer104911}. \nThe morphology of the condensate depends critically on $\\kappa$ (Fig.~\\ref{cfg}B).\n(i) For small $\\kappa$ ($\\alt 3$ $k_{B}T\/\\text{rad}^2$, $l_{p} < L\/10$),\nthe PE brush forms octopus-like surface micelle domains demarcated by the chain-depleted grain boundaries. \nThe average height of the brush $H$\nincreases with $\\kappa$ ($\\leq 3$ $k_{B}T\/\\text{rad}^2$). \nSo does the surface area of the domain projected onto the $xy$-plane (see also Fig.~S1).\n(ii) For large $\\kappa$ ($> 15$ $k_{B}T\/\\text{rad}^2$, $l_{p} > L\/2$), \nthe condensed chains are organized into a dendritic assembly. \nNeighboring chains are assembled together, forming axially coaligned branches of varying thickness. \nThe density of chain monomers slightly increases as the chains get stiffer (see Fig.~S2A), \nwhich reduces the brush height.\nIt is also noted that the end-to-end distance $R_{ee}$ of the collapsed polymers, color-coded from blue to red for individual chains, displays the broadest distribution at an intermediate stiffness $3 < \\kappa < 15$ $k_{B}T\/\\text{rad}^2$, \nwhich indicates that the conformational ensemble is the most heterogeneous in this range of $\\kappa$ (see also Fig.~S3A,B).\n\n\\begin{figure}[t]\n\\centering\\includegraphics[width=\\gsz\\linewidth]{static.pdf}\n\\caption{Structure of brush condensates. \n(A) In-plane static structure factor $S_{xy}(k)$ as a function of wave number $k$ for brushes of different $\\kappa$.
\nThe gray solid bar demarcates the range $2\\pi\/L_{x} \\leq k \\leq 2\\pi\/L_{y}$, i.e., the periodic boundary of the simulation box.\nThe red circles highlight the position of primary peak when $\\kappa \\leq 5$ $k_{B}T\/\\text{rad}^2$. \n(B) Area of the condensate on $z=0$ plane $n(r)$, as a function of a linear dimension $r$ with respect to its center. \nOne exemplary illustration is provided in the inset. \nFor visual clarity, the curves in (A,B) are shifted upward progressively.\n}\n\\label{static}\n\\end{figure}\n\nTo quantify the in-plane lateral configuration of the brush, we calculated the 2D static structure factor \n\\begin{equation}\nS_{xy}(k) = \\Big \\langle \\Big \\langle \\frac{1}{N_{m}} \\bigg \\arrowvert \\sum_{i,j=1}^{N_{m}} e^{i \\vec{k} \\cdot (\\vec{r}_i-\\vec{r}_{j})} \\bigg \\arrowvert\\Big \\rangle_{|\\vec{k}|} \\Big \\rangle,\n\\label{eq-ssf} \n\\end{equation} \nwhere $N_{m} = M \\times N$ is the total number of non-grafted chain monomers, \n$\\vec{r}_{i}$ is the position of the $i$-th monomer, and $\\vec{k}$ is a 2D wave vector in the $xy$ plane.\n$S_{xy}(k)$ is evaluated by first integrating over the space of $| \\vec{k} | = k$, followed by averaging over the ensemble of MD trajectories.\n$S_{xy}(k)$ exhibits distinct profiles when $\\kappa$ is varied (Fig.~\\ref{static}A).\nFor octopus-like micelles, there is a primary peak (indicated by red circles) characterizing the size (area) of the domain, \nwhose position shifts to a smaller wave number as $\\kappa$ increases, indicating that the domain size grows with $\\kappa$.\nHowever, this peak gradually vanishes as the stiffness of chain is increased. \nThe absence of the peak in $S_{xy}(k)$ is due to the morphological transition from the finite-sized surface micelles to the scale-free dendritic assembly.\n\nTo quantify the dendritic patterns in 2D, we further analyzed their fractal dimensions $\\mathcal{D}_{f}$. 
\nWe divide the grafting surface into a square lattice with cell size $a \\times a$. \nWhen at least one chain monomer is present in a cell, the cell contributes to the ``area\" of the condensate. \nThe area of the dendritic pattern within a radius $r$ is $n(r) =\\langle a^2 \\sum_{p,q} o_{p,q} \\Theta(r - r_{p,q})\\rangle$, \nwhere $\\Theta(\\ldots)$ is the Heaviside step function, $o_{p,q} = \\Theta[\\sum_{i=1}^{N_{m}} \\delta(x_{i} - pa)] \\times \\Theta[\\sum_{i=1}^{N_{m}} \\delta(y_{i} - qa)]$, \nand $r_{p,q}$ is the distance of the cell from a center of high monomer density. \n$n(r)$ was obtained by averaging over the cells with the five highest monomer densities in each snapshot. \n\n$n(r)$ scales as $n(r) \\sim r^{\\mathcal{D}_f}$, and the value of the scaling exponent $\\mathcal{D}_{f}$ varies at different length scales (Fig.~\\ref{static}B).\n(i) In the range of $6 < r\/a < 40$, $\\mathcal{D}_f\\approx 1.35$ for brushes of rigid chains, \nwhereas $\\mathcal{D}_f\\approx 0$ for flexible brushes. \nThe transition from micelle domains with finite size ($\\mathcal{D}_f \\approx 0$) to a scale-free dendritic assembly ($\\mathcal{D}_f> 0$) is observed at $\\kappa\\approx 5$ $k_{B}T\/\\text{rad}^2$ (see Fig.~\\ref{cfg}B).\n(ii) At $r\/a > 50$, $\\mathcal{D}_f \\approx 1.75$ for rigid brushes ($\\kappa > 15$ $k_{B}T\/\\text{rad}^2$), \nand $\\mathcal{D}_f\\approx 2$ for flexible brushes ($\\kappa \\leq 5$ $k_{B}T\/\\text{rad}^2$).\nThe scaling exponent $\\mathcal{D}_{f}\\approx 2$ arises when the monomers are uniformly distributed on the surface such that the density of condensates $\\rho_{m} = n(r)\/{\\pi r^2}$ is constant with respect to $r$.
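As a sanity check of this box-counting procedure, the minimal Python sketch below applies the same $n(r)$ measurement to a synthetic, uniformly occupied disk of cells (a stand-in for a flexible-brush condensate; the disk radius and fitting range are arbitrary choices) and recovers $\mathcal{D}_f \approx 2$:

```python
import math

a = 1.0
R = 60
# synthetic "condensate": every cell inside a disk of radius R is occupied,
# mimicking a uniform (D_f = 2) monomer distribution on the surface
occupied = {(p, q) for p in range(-R, R + 1) for q in range(-R, R + 1)
            if p * p + q * q <= R * R}

def n_of_r(r):
    # area of occupied cells within distance r of the density center
    return a * a * sum(1 for (p, q) in occupied if math.hypot(p, q) <= r)

# n(r) ~ r^{D_f}: fit the exponent between r = 10 and r = 40
Df = math.log(n_of_r(40) / n_of_r(10)) / math.log(4.0)
assert abs(Df - 2.0) < 0.05
```

Replacing the filled disk with a branched point set would lower the fitted exponent toward the dendritic value, which is how the measurement discriminates the two morphologies.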
\nUnlike the octopus-like micelles surrounded by the chain-depleted zone, the dendritic condensate percolates over the entire surface.\nAnalyzing fluorescence images of the dendritic condensate of a dsDNA brush with a similar method \\cite{2014BarZiv4945}, \nBracha \\emph{et al.} reported $\\mathcal{D}_{f} = 1.75$.\n\nAnother quantity often used to address the fractality is the 2D version of the radial distribution function, \ndefined as $C_{xy}(r) = \\sum_{i>j} \\delta(r_{i,j}-r) \/ \\pi r^2 N_{m}$.\n$C_{xy}$ scales as $r^{\\mathcal{D}_{f} - 2}$ in a fractal aggregate of dimension $\\mathcal{D}_{f}$ \\cite{1986Sander789,2016Dossetti,2017Dossetti3523}. \nConsistent with the analysis of $n(r)$, \n$C_{xy}$ of chain monomers indeed follows a scaling $C_{xy} \\sim r^{1.35 - 2} = r^{-0.65}$ in the intermediate range of $r$ when $\\kappa$ is large (Fig.~S4C). \n\\\\\n\n\n\n{\\bf Local chain organization. }\nThe local structure of chains in the brush condensates also changes with $\\kappa$ (see the insets of Fig.~\\ref{bdOrder}A). \nWhen chains are flexible, the adjacent chains condensed to the same micelle appear highly entangled.\nIt is not visually clear whether two monomers close in space are in the same chain or in different chains. \nIn contrast, when chains are rigid, they are aligned in parallel, \nand the strong orientational correlation between consecutive bonds allows us to easily discern one chain from another.
\nTo characterize the local ordering of polymer segments in the collapsed brush, \nwe employed the liquid crystal order parameter \\cite{1986Eppenga1776,2000Frenkel10034}.\nFor any two consecutive monomers ${i,i+1}$ in the same chain, \na unit bond vector $\\hat{b}_{i}$ is defined by its orientation $\\vec{u}_{i} = (\\vec{r}_{i+1} - \\vec{r}_{i}) \/ |\\vec{r}_{i+1} - \\vec{r}_{i}|$, \nand its position $\\vec{v}_{i} = (\\vec{r}_{i+1} + \\vec{r}_{i}) \/ 2$.\nThe radial distribution of such two bond vectors can be evaluated as \n\\begin{equation}\ng_{0}^{b}(r_{\\perp},r_{\\parallel}) = \\frac{\\sum_{i,j} \\delta(r_{ij,\\perp} - r_{\\perp}) \\delta(r_{ij,\\parallel} - r_{\\parallel})}{\\pi r_{\\perp}^2 r_{\\parallel} N_{b}},\n\\label{eg0}\n\\end{equation}\nwhere $\\vec{r}_{ij}^b = \\vec{v}_{j} - \\vec{v}_{i}$, $\\vec{r}_{ij,\\parallel} = \\vec{r}_{ij}^b \\cdot \\vec{u}_{i}^b$, \n$\\vec{r}_{ij,\\perp} = \\vec{r}_{ij}^b - \\vec{r}_{ij,\\parallel}$, and $N_b = M \\times N$ is the total number of bonds in the brush. \nThe vector $\\vec{r}_{ij}^{b}$, pointing from bond $\\hat{b}_{i}$ to another bond $\\hat{b}_{j}$, \nwas decomposed into the parallel and perpendicular components ($\\vec{r}_{ij,\\parallel}$ and $\\vec{r}_{ij,\\perp}$ ) \nwith respect to the orientation of $\\hat{b}_{i}$.\nThe heat map of $g_{0}^{b}(r_{\\perp},r_{\\parallel})$ in Fig.~\\ref{bdOrder}A, indicates that the bonds of flexible chains in a micelle are isotropically distributed. \nAs $\\kappa$ increases, density correlation first rises along the axis of $r_{\\parallel}$. 
\nBecause the effective attraction between monomers from neighboring chains increases with $\\kappa$ (Fig.~S4B), \nbond density correlation also appears on the $r_{\\perp}$ axis when $\\kappa > 10$ $k_{B}T\/\\text{rad}^2$.\n\n\\begin{figure}[tb]\n\\centering\\includegraphics[width=\\gsz\\linewidth]{bdorder.pdf}\n\\caption{\nLocal chain organizations.\n(A) Heat map of the density distribution of bonds $g_{0}^{b}$ (Eq.\\ref{eg0}), and (B) orientational order parameter $g_{2}^{b}$ (Eq.\\ref{eg2}) as a function of $r_{\\perp}$ and $r_{\\parallel}$.\nFrom left to right panels, $\\kappa = 0,5,60$ $k_{B}T\/\\text{rad}^2$, respectively.\nInsets are snapshots of a small region, with a size of $32 a\\times 32 a$, in the corresponding brush condensates.\n(C) Inter-chain bond orientational order $g_{2}^{b}(r_{\\perp}^{\\ast},0)$ as a function of $\\kappa$, where $r_{\\perp}^{\\ast}$ is the position of the highest peak in $g_{0}^{b}(r_{\\perp},0)$. \n}\n\\label{bdOrder}\n\\end{figure}\n\nThe relative orientational correlation between bond vectors, which cannot be described by $g_{0}^{b}(r_{\\perp},r_{\\parallel})$ alone, is quantified by calculating \\cite{1986Eppenga1776,2000Frenkel10034}\n\\begin{equation}\ng_{2}^{b}(r_{\\perp},r_{\\parallel}) = \\frac {\\sum_{i,j} \\cos(2 \\theta_{ij}) \\delta(r_{ij,\\perp} - r_{\\perp}) \\delta(r_{ij,\\parallel} - r_{\\parallel})} {\\sum_{i,j} \\delta(r_{ij,\\perp} - r_{\\perp}) \\delta(r_{ij,\\parallel} - r_{\\parallel})},\n\\label{eg2}\n\\end{equation}\nwhere $\\theta_{ij}$ is the angle between $\\hat{b}_{i}$ and $\\hat{b}_{j}$, thus $\\cos(2 \\theta_{ij}) = 2 (\\vec{u}_{i} \\cdot \\vec{u}_{j})^2 - 1$. \n$\\cos(2\\theta) \\leq 0$ if $\\pi\/4 \\leq \\theta \\leq 3\\pi\/4$.\nIn the case of an isotropic distribution, $g_{2}^{b*} = \\int_{0}^{\\pi} \\sin(\\theta) \\cos(2\\theta) d\\theta \/ \\int_{0}^{\\pi} \\sin(\\theta) d\\theta = -1\/3$.
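Both the double-angle identity $\cos(2\theta) = 2\cos^2\theta - 1$ behind this order parameter and the isotropic baseline $-1\/3$ can be verified numerically, as in the short Python sketch below (midpoint-rule integration; the grid size is an arbitrary choice):

```python
import math

# double-angle identity used in the order parameter: cos(2t) = 2 cos(t)^2 - 1
for th in (0.1, 0.7, 2.3):
    assert abs(math.cos(2 * th) - (2 * math.cos(th) ** 2 - 1)) < 1e-12

# isotropic average: int sin(t) cos(2t) dt / int sin(t) dt over [0, pi]
n = 100000
num = den = 0.0
for i in range(n):
    th = (i + 0.5) * math.pi / n   # midpoint rule
    num += math.sin(th) * math.cos(2 * th)
    den += math.sin(th)
assert abs(num / den + 1.0 / 3.0) < 1e-6   # equals -1/3
```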
\nFor flexible chains with $\\kappa=0$ $k_{B}T\/\\text{rad}^2$ (Fig.~\\ref{bdOrder}B left), \nthe positive correlation arises only from their nearest neighboring bond along the chain, \nand $g_{2}^{b}$ converges to $-1\/3$ within a very short range ($r < 2a$).\nAt $\\kappa=5$ $k_{B}T\/\\text{rad}^2$, intra-chain bonds are well ordered, but on the $r_{\\perp}$ axis $g_{2}^{b} \\approx -1\/3$ when $r_{\\perp} > 2.5a$, \nwhich suggests that except for the nearest neighbors, the bonds from different chains are still poorly aligned.\nLastly, at $\\kappa=60$ $k_BT\/\\text{rad}^2$, $g_{2}^{b} (r_{\\perp},r_{\\parallel})> 0$ in both $r_{\\perp}$ and $r_{\\parallel}$ directions with $r_{\\perp}$, $r_{\\parallel}\\gg 1$, \nin agreement with the observation that rigid chains are bundled together forming the branches of the condensate.\n\nTo highlight the effect of $\\kappa$ on the local \\emph{inter}-chain organization in the condensate, \nwe plotted $g_{2}^{b}(r_{\\perp}^{\\ast},0)$ (Fig.~\\ref{bdOrder}C) against $\\kappa$, by considering it as a single-valued estimate of the inter-chain bond alignment, \nwhere $r_{\\perp}^{\\ast}$ is the position of the highest peak of $g_{0}^{b}(r_{\\perp},0)$ (see also Fig.~S5). \nIn the brush condensate, chains are randomly entangled with each other when $\\kappa \\leq 3$ $k_{B}T\/\\text{rad}^2$, \nbut they display nearly perfect alignment when $\\kappa > 30$ $k_{B}T\/\\text{rad}^2$.\nThis disorder-to-order ``transition\" takes place around $\\kappa \\approx 5$ $k_{B}T\/\\text{rad}^2$ (Fig.~\\ref{bdOrder}C).\n\\\\\n\n\\begin{figure}[tb]\n\\centering\\includegraphics[width=\\gsz\\linewidth]{dynamics.pdf}\n\\caption{\nDynamic properties of brush condensate and counterions. \n(A) Normalized intermediate scattering function $f_{xy}(k,t) = F_{xy}(k,t) \/ F_{xy}(k,0)$ (Eq.\\ref{eq-isf}) of chain monomers at wave numbers $2\\pi\/k_{1} = 1.1$ $a$ and $2\\pi\/k_{3} = 3.6$ $a$.
\n(B) Conformational relaxation time of chains $\\tau_{i} = \\int f_{xy}(k_{i},t) dt$ with different $\\kappa$.\n(C) Mean square displacement of trivalent cations, either trapped in the condensate or free in the bulk. Symbols have the same meanings as in (A).\n(D) Diffusion coefficients of trapped trivalent cations (3+) and chain monomers.\n}\n\\label{dynamics}\n\\end{figure}\n\n\\begin{figure*}[t]\n\\centering\\includegraphics[width=0.6\\linewidth]{sum.pdf}\n\\caption{\nTime-averaged monomer density heat maps of PE brush condensates as a function of the bending rigidity parameter $\\kappa$ and the grafting distance $d$.\nVisually distinct morphologies are colored differently: homogeneous compact layer (black), octopus-like surface micelles (blue), \nsingle-chain tadpole-like condensate (green), dendritic domains and networks (red and purple).\nThose labeled with asterisks (*) depict the simulation results of a smaller brush ($M=4\\times4$) from our previous study \\cite{2017Hyeon1579}.\n}\n\\label{sum}\n\\end{figure*}\n\n{\\bf Dynamics of brush condensates.
}\nIn order to quantify the dynamics of PE brush, \nwe calculated the intermediate scattering function, which is the density-density time correlation function (van Hove correlation function) in Fourier domain, \n\\begin{align}\nF_{xy}(k,t) = \\Big\\langle \\Big \\langle \\frac{1}{N_{m}} \\sum_{m=1}^{N_{m}} e^{i \\vec{k} \\cdot \\vec{r}_{m}(t+t_{0})} \\sum_{n=1}^{N_{m}} e^{-i \\vec{k} \\cdot \\vec{r}_{n}(t_{0})} \\Big \\rangle_{|\\vec{k}|} \\Big \\rangle_{t_{0}} \n\\label{eq-isf}\n\\end{align} \nwhere $\\langle \\langle \\ldots \\rangle_{|\\vec{k}|} \\rangle_{t_0}$ is an average over time $t_0$ and over the direction of a 2D wave vector $\\vec{k}$ of magnitude $k$.\nThe dynamics of brush chain at different length scales can be probed in terms of \n$F_{xy}(k,t)$ evaluated at different $k$ ($k_{i} = 2\\pi\/r_{i}^{\\ast}$ where $i=1$, 2, 3 and $r_{i}^{\\ast}\/a = 1.1$, 2.0, 3.6 are the positions of the three highest peaks in the radial distribution function of chain monomers (see Fig.~S4C,D)).\nThe normalized function $f_{xy}(k,t) = F_{xy}(k,t) \/ F_{xy}(k,0)$, with $k_{1}$ and $k_{3}$, are shown in Fig.~\\ref{dynamics}A, \nand the corresponding mean relaxation time $\\tau_{i} = \\int_{0}^{\\infty} f_{xy}(k_i,t) dt$ is presented in Fig.~\\ref{dynamics}B. \nAt a small length scale $k^{-1}_{1}$, $f_{xy}(k_1,t)$ decays to zero within the timescale of $t< 10 \\tau_{\\text{BD}}$, which implies that chain monomers are fluidic beyond this time scale. \nBut, compared to octopus-like micelle with $\\kappa=0$ $k_{B}T\/\\text{rad}^2$, \nthe dendritic assembly made of brush chains with $\\kappa=120$ $k_{B}T\/\\text{rad}^2$ displays $\\sim 14$-fold slower relaxation profile of $f_{xy}(t)$.\nThe relaxation becomes much slower at larger length scale $k^{-1}_{3}$, \nand $\\tau_{3}$ for rigid chain comprising the dendritic assembly is as long as our total simulation time ($\\sim \\mathcal{O}(10^3) \\tau_{\\text{BD}}$). 
\nWe also notice that the ratio of relaxation times, $\\eta_{i} = \\tau_{i}(\\kappa=120 \\text{ }k_BT\/\\text{rad}^2) \/ \\tau_{i}(\\kappa=0 \\text{ }k_BT\/\\text{rad}^2)$ at the three positions $r^{\\ast}_i$ (with $i=1$, 2, 3), follows the order $\\eta_{3} > \\eta_{2} > \\eta_{1}$. \nThis is expected because the contribution from inter-chain relaxation to the total relaxation time is higher at $r_{3}$ than at $r_{1}$. \nA tight and well-aligned chain organization at $\\kappa = 120$ $k_{B}T\/\\text{rad}^2$ further increases $\\tau_{3}$ in comparison to $\\tau_{1}$.\nFor the most rigid dendrite, $\\tau_{3}$ is $\\sim 60$-fold greater than that of the surface micelle formed by the flexible PE brush.\n\n\n\n\nNext, the mobility of trivalent cations, either trapped in the condensate (within $\\lambda_{B}$ from the chains) or free in the bulk solution, was quantified using an ensemble- and time-averaged mean squared displacement, \n$\\text{MSD}(t) = \\langle \\langle |\\vec{r}_{i}(t+t_0) - \\vec{r}_{i}(t_0)|^2 \\rangle_{t_0} \\rangle$, as shown in Fig.~\\ref{dynamics}C. \nWhen $\\kappa \\leq 5$ $k_{B}T\/\\text{rad}^2$, although trapped ions are mobile, \nthe MSD shows a long-time subdiffusive behavior because ions are confined in individual micelles \\cite{2017Tirrell1497} (Supplementary Movie 1).\nBy contrast, for $\\kappa > 10$ $k_{B}T\/\\text{rad}^2$, condensed ions can freely diffuse along the dendritic branches. \nAs a result, the MSD grows linearly with time. \n\n\nThe diffusion coefficient of trapped trivalent cations, estimated using $D = \\text{MSD}(\\Delta t) \/ (6 \\Delta t)$ for $\\Delta t = 1\\times 10^{3}$ to $1.5 \\times 10^{3} \\tau_{\\text{BD}}$, is non-monotonic with $\\kappa$. \nThis change agrees with the change of the brush morphology where ions are confined.\nIn the micelle phase, the micelle size grows with $\\kappa$, which provides a larger space for the trapped ions to navigate.
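As an illustration of this MSD-based estimate, the Python sketch below computes the time-averaged MSD of a plain Gaussian random walk (standing in for a trapped ion; the input diffusion coefficient, time step, lag, and seed are all arbitrary choices) and recovers the input $D$ from $\text{MSD}(\Delta t)\/(6\Delta t)$:

```python
import random

random.seed(1)
D_true, dt, nsteps = 0.5, 1.0, 200000
sigma = (2 * D_true * dt) ** 0.5      # per-dimension step size for diffusion
traj = [(0.0, 0.0, 0.0)]
for _ in range(nsteps):
    x, y, z = traj[-1]
    traj.append((x + random.gauss(0, sigma),
                 y + random.gauss(0, sigma),
                 z + random.gauss(0, sigma)))

lag = 100   # lag time Delta t, in units of dt
# time-averaged MSD: << |r(t0 + lag) - r(t0)|^2 >>_{t0}
msd = sum((traj[i + lag][k] - traj[i][k]) ** 2
          for i in range(len(traj) - lag) for k in range(3)) / (len(traj) - lag)
D_est = msd / (6 * lag * dt)
assert abs(D_est - D_true) / D_true < 0.1
```

A confined or subdiffusive trajectory would instead give a lag-dependent $D_{\text{est}}$ that decreases with the lag, which is the signature seen for ions trapped in micelles.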
\nIn the dendrite phase, the effective attraction between neighboring chains, mediated by the counterions, increases with $\\kappa$ (see Figs.~S2A, ~S4B) and tightens the bundling of PE, which in turn reduces the mobility of the condensed ions. \nThe trapped trivalent ions diffuse $>$ 10-fold slower than those in the bulk, \nbut still $\\sim $ 100-fold faster than chain monomers in the dendritic assembly, \neven though the same value of bare diffusion coefficient was assumed for all ions and chain monomers.\nThe bundles of rigid chains form a network of ``highways\", on which the condensed trivalent ions freely diffuse (Supplementary Movie 2).\n\n\n\n\n\n\\section{Discussion}\n\n{\\bf Effect of grafting density on the morphology of brush condensates. } \nThe morphological transitions from the octopus-like surface micelles to the dendritic condensates are reminiscent of a sol-to-gel transition. \nAnalogous to a gelation transition, the ``bond probability\" $p$ can be tuned by changing either the chain stiffness ($\\kappa$) or the grafting distance ($d$). \nBelow the gelation point ($p<p_c$), the condensates form isolated, finite-sized domains, whereas above it ($p>p_c$) the domains are all connected together, covering the entire space. \nWe further performed simulations of a semiflexible brush, at $\\kappa=10$ $k_{B}T\/\\text{rad}^2$, by varying the inter-chain spacing $d$ (see Fig.~\\ref{sum} and Fig.~S6). \nTime-averaged monomer density heat maps of PE brushes $\\langle \\rho(x,y) \\rangle$ (Fig.~\\ref{sum}) visualize how the morphology of brush condensates changes as a function of the chain bending rigidity parameter $\\kappa$ and the inter-chain spacing $d$. \n\nNotably, changes in $\\kappa$ and $d$ display qualitatively different effects on the morphologies below and above the ``gelation point.\" \nAt $d=16a$ with increasing $\\kappa$ (panels enclosed by the magenta boundary in Fig.\\ref{sum}), the initially sol-like micelle domains percolate into a gel-like dendritic pattern whose branches span the entire surface.
\nIn contrast, when the chain stiffness is fixed to $\\kappa=10$ $k_BT\/\\text{rad}^2$ and the grafting distance is varied from a large value ($d=32a$) to a small one ($d=8a$) (panels enclosed by the cyan boundary in Fig.\\ref{sum}), \nthe initial sol-like isolated domains are characterized by heterogeneous condensates of semiflexible chains that collapse on site into toroids or rod-like bundles, rather than by tadpole- or octopus-like micelle condensates; with decreasing $d$, the chains further assemble into a dendritic pattern and then into a non-uniform fractal-like meshwork layer. \n\\\\\n\n\n\n\n{\\bf Size of octopus-like surface micelle. } \nFor the octopus-like surface micelles, a scaling argument was developed based on equilibrium thermodynamics \\cite{2017Hyeon1579}. \nThe domain size is determined by the balance between the surface tension resulting from the counterion-mediated attraction \nand the elastic penalty to stretch the grafted chains to form a surface micelle. \nWhen $\\kappa \\leq 3$ $k_{B}T\/\\text{rad}^2$, $l_{p}$ is small enough to approximate the individual PE as a flexible chain with Kuhn length $2 l_{p}$. \nFor an octopus-like domain containing $n$ chains within a surface area $\\sim R_{c}^2 \\simeq l_{p}^{2} (n N \/ l_{p})^{2 \\nu}$, \nthe surface energy is $F_{n,\\text{surf}} = \\xi k_{B}T l_{p}^{2} (n N a\/ l_{p})^{2 \\nu}$, \nwhere $\\xi$ sets the scale of attraction between chain segments and $\\nu$ is the Flory exponent.
\nMeanwhile, the elastic penalty is $F_{n,\\text{el}} = n k_{B} T R_{c}^2 \/ (l^2_p N_{s}) = n k_{B} T R_{c} \/l_{p} = k_{B} T \\sigma R_{c}^{3} \/ l_{p}$,\nwhere $N_{s} = R_{c}\/l_{p}$ is the number of statistical segments in each chain to be stretched to reach the micelle, \nand $\\sigma = n\/R_{c}^2$ is the chain grafting density.\nThe total free energy per area in the octopus-like condensate with $n$ arms is \n\\begin{align}\n\\frac{f_{\\text{octo}}}{k_{B}T} &= \\frac{1}{k_BT}\\frac{F_{n,\\text{surf}}+F_{n,\\text{el}}}{R_c^2}\\nonumber\\\\\n&= \\frac{\\xi (\\sigma Na)^{2\\nu} l_{p}^{2-2\\nu}}{R_{c}^{2-4\\nu}} + \\frac{\\sigma R_{c}}{l_p}. \n\\label{eq-oct}\n\\end{align}\nMinimization of $f_{\\text{octo}}$ with respect to $R_{c}$ provides the micelle size corresponding to a minimum free energy \n$R_c^* \\sim l_{p}^{\\frac{3-2\\nu}{3-4\\nu}} (Na)^{\\frac{2\\nu}{3-4\\nu}} \\sigma^{\\frac{2\\nu-1}{3-4\\nu}}$.\nFor $\\nu = 1\/3$, $R^*_{c}$ increases as $\\sim l_{p}^{7\/5}$ (thus with $\\kappa$), until neighboring micelles are about to overlap.\nBeyond this overlap point, the picture of isolated semispherical micelles no longer holds.\n\\\\\n\n{\\bf Fractal dimension of dendritic condensate. } \nIn the case of the dendritic condensate, we found that $n(r)\\sim r^{\\mathcal{D}_f}$. \nIn particular, $\\mathcal{D}_{f} \\approx 1.75$, observed at large $r$ ($r\/a > 50$), was also reported in Bracha \\emph{et al.}'s experiment \\cite{2014BarZiv4945}. \nIncidentally, the morphology of the aggregate changes depending on how the trivalent salt is added \\cite{2014BarZiv4945}. \nThus, the dendritic morphologies are effectively formed under kinetic gelation rather than equilibrium gelation.
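Returning to the micelle-size estimate above, the scaling $R_c^* \sim l_{p}^{7\/5}$ for $\nu = 1\/3$ can be checked by minimizing Eq. \eqref{eq-oct} on a grid; in the Python sketch below, the prefactors ($\xi$, $\sigma$, $N$, $a$) are arbitrary choices that drop out of the fitted exponent:

```python
import math

def R_star(lp, sigma=0.01, N=80, a=1.0, xi=1.0, nu=1.0 / 3.0):
    # grid minimization of f_octo(R) = xi (sigma N a)^{2 nu} lp^{2-2 nu} R^{4 nu - 2}
    #                                  + sigma R / lp
    A = xi * (sigma * N * a) ** (2 * nu) * lp ** (2 - 2 * nu)
    f = lambda R: A * R ** (4 * nu - 2) + sigma * R / lp
    grid = [1.0 + 0.01 * i for i in range(20000)]    # R in [1, 201)
    return min(grid, key=f)

# fit the exponent of R_c* vs l_p from two stiffness values
slope = math.log(R_star(4.0) / R_star(2.0)) / math.log(2.0)
assert abs(slope - 7.0 / 5.0) < 0.02   # R_c* ~ lp^{7/5} for nu = 1/3
```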
\n \n\n\n\n\nThe premise that the process of dendritic assembly is kinetically controlled guides the direction of our theoretical analysis.\nSince the collapse is effectively irreversible and the bundles grow preferentially from the ``active front\" of the preexisting domain \\cite{1999Liu624},\nwe use the principle underlying diffusion-limited aggregation (DLA) \\cite{1981Sander1400}\nto explain the observed fractal dimension. \nDLA describes a far-from-equilibrium growth phenomenon, \nwhere each particle diffuses along a trajectory of fractal dimension $d_{w}$ until it binds irreversibly to any particle of the aggregate. \nA generalized Honda-Toyoki-Matsushita mean-field approximation \\cite{1984Kawasaki337,1986Kondo2618} suggests that the fractal dimension of the aggregate is\n\\begin{equation}\n\\mathcal{D}_{\\text{MF}} = \\frac{d_s^2 + \\eta (d_{w}-1)}{d_s + \\eta (d_{w}-1)},\n\\label{eq-dla} \n\\end{equation} \nwhere in the presence of long-range attractive interactions the probability of growth at a certain position is assumed to be proportional to the gradient of a scalar field (e.g., monomer density) as $\\sim |\\nabla\\phi|^\\eta$. \nFor DLA ($\\eta = 1$, $d_w=2$) in two dimensions ($d_s=2$), Eq.\\ref{eq-dla} gives $\\mathcal{D}_{\\text{MF}} =\\mathcal{D}_{f,\\text{DLA}} = 5\/3$.\nNumerical simulations report $\\mathcal{D}_{f,\\text{DLA}} = 1.71$ \\cite{2016Dossetti,2017Dossetti3523}. \n\nDLA has also been exploited to explain the dynamics and aggregation of a 3D gel-like network formed by rigid PE chains in a poor solvent \\cite{2017Asahi5991}.
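Evaluating Eq.\ref{eq-dla} is a one-line computation; the sketch below reproduces the quoted $\mathcal{D}_{\text{MF}} = 5\/3$ for classic 2D DLA and, for comparison, the $d_w = 1$ (ballistic-trajectory) limit, in which the mean-field dimension reduces to the compact value $d_s$:

```python
from fractions import Fraction

def D_MF(ds, dw, eta):
    # Honda-Toyoki-Matsushita mean-field estimate of the aggregate dimension
    ds, dw, eta = Fraction(ds), Fraction(dw), Fraction(eta)
    return (ds**2 + eta * (dw - 1)) / (ds + eta * (dw - 1))

assert D_MF(2, 2, 1) == Fraction(5, 3)   # classic DLA in 2D
assert D_MF(2, 1, 1) == 2                # straight trajectories give a compact cluster
```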
\nThe fractal nature of the dendritic pattern may well be an outcome of premature quenching of the brush configuration into condensates during the competition \nbetween the gain in energy upon aggregation and the entropic gain of chain fluctuations.\n\\\\\n\n\n\n\n\\section{Concluding Remarks}\nCollapse of the brush condensate into either surface micelles \\cite{2017Tirrell1497} or a dendritic pattern \\cite{2014BarZiv4945} is controlled by the chain flexibility.\nFundamental differences are found in the dynamics of chains and condensed ions as well as in the microscopic chain arrangement.\nThe new insights into the link between the micro-scale details and brush morphology will be of great use to design material properties and understand biological functions of PE brushes. \n\n\\section{Model and Methods}\n\\label{MaM}\n{\\bf Model and energy potential. } As in our previous study \\cite{2017Hyeon1579}, we used a well-tested coarse-grained model of a strong polyelectrolyte (PE) brush \\cite{2000Seidel2728,2003Stevens3855,2014Hsiao2900}. \nA total of $M (= 16\\times16)$ polymer chains were grafted to the uncharged surface of a 2D triangular lattice (Fig.~\\ref{cfg}A). \nThe lattice spacing $d$ was set to $16 a$, which is small enough to ensure lateral overlap between neighboring chains, where $a$ is the diameter of chain monomers and ions.\nEach chain consists of $N (= 80)$ negatively charged monomers and a neutral terminal monomer grafted to the surface. \nThe simulation box has dimensions \n$L_{x}\\times L_{y}\\times L_{z} = {(\\sqrt{M} d)} \\times {(\\sqrt{3M} d\/2)} \\times {(2 N a)} = 256 a \\times 128\\sqrt{3} a \\times 160 a$. \nPeriodic boundary conditions were applied along the $x$ and $y$ axes, \nand impenetrable neutral walls were placed at $z=0$ and $2 N a$. \n\nWe considered the following energy potentials to model a semiflexible PE brush in \\emph{good} solvents with multivalent salts. 
\nFirst, the distance between neighboring chain monomers was constrained by a finite extensible nonlinear elastic (FENE) bond potential \n\\begin{equation}\nU_{bond}(r) = -\\frac{k_{0} R_{0}^{2}}{2} \\log\\left(1-\\frac{r^{2}}{R_{0}^{2}}\\right),\n\\label{ub}\n\\end{equation}\nwith a spring constant $k_{0} = 5.83 ~k_{B}T\/a^2$ and a maximum extensible bond length $R_{0} = 2a$.\nSecond, the chain stiffness was modulated with an angular potential \n\\begin{equation}\nU_{angle}(\\theta) = \\kappa (\\theta - \\pi)^{2},\n\\label{ua}\n\\end{equation}\nwhere $\\kappa$ is the bending rigidity parameter and $\\theta$ is the angle between three consecutive monomers along the chain. \nThird, the excluded volume interaction between ions and chain monomers was modeled \nusing the Weeks-Chandler-Andersen potential \n\\begin{equation}\nU_{excl}(r) = 4 \\epsilon \\left[\\left(\\frac{a}{r}\\right)^{12}-\\left(\\frac{a}{r}\\right)^{6}+\\frac{1}{4}\\right] \\Theta(2^{1\/6} a - r),\n\\label{ue}\n\\end{equation}\nin which $\\epsilon = 1 ~k_{B}T$ and $\\Theta(\\ldots)$ denotes the Heaviside step function.\nFourth, Coulomb interactions were assigned between charged particles $i$, $j$, which include both chain monomers and ions, \n\\begin{equation}\nU_{elec}(r) = \\frac{k_{B} T \\lambda_{B} z_{i} z_{j}}{r},\n\\label{uq}\n\\end{equation} \nwhere $z_{i,j}$ is the valence of the charge. \nThe Bjerrum length is defined as $\\lambda_{B}=e^{2}\/(4 \\pi \\epsilon_{0} \\epsilon_{r} k_{B}T)$, \nwhere $\\epsilon_{0}$ is the vacuum permittivity and $\\epsilon_{r}$ is the relative dielectric constant of the solvent. \nLastly, wall confinement was modeled to repel any monomer \nthat approaches the wall closer than $a\/2$, such that\n\\begin{equation}\nU_{wall}(z) = 4 \\epsilon \\left[\\left(\\frac{a}{z+\\Delta}\\right)^{12}-\\left(\\frac{a}{z+\\Delta}\\right)^{6}+\\frac{1}{4}\\right] \\Theta(a\/2 - z),\n\\label{uw}\n\\end{equation} \nwith $\\Delta = (2^{1\/6}-1\/2) a$. 
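For readers reimplementing the model, the pairwise terms above can be written as plain functions. A sketch (not the simulation code itself; parameter values are those quoted in the text, with energies in units of $k_BT$ and lengths in units of $a$):

```python
import numpy as np

K0, R0 = 5.83, 2.0        # FENE spring constant (k_B T / a^2) and max bond length (a)
EPS, A = 1.0, 1.0          # WCA depth (k_B T) and particle diameter (a)
LAMBDA_B = 4.0             # Bjerrum length in units of a

def u_bond(r):
    """FENE bond potential, Eq. (ub); diverges as r -> R0."""
    return -0.5 * K0 * R0**2 * np.log(1.0 - (r / R0)**2)

def u_excl(r):
    """WCA excluded-volume potential, Eq. (ue); zero beyond the 2^(1/6) a cutoff."""
    if r >= 2**(1 / 6) * A:
        return 0.0
    sr6 = (A / r)**6
    return 4.0 * EPS * (sr6**2 - sr6 + 0.25)

def u_elec(r, zi, zj):
    """Coulomb interaction, Eq. (uq), in units of k_B T."""
    return LAMBDA_B * zi * zj / r

print(u_bond(1.1))            # at the typical bond length <b> = 1.1 a
print(u_excl(1.0))            # at contact: 4*eps*(1 - 1 + 1/4) = eps
print(u_elec(1.0, -1, 3))     # monomer / trivalent cation at contact
```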
\nFor simplicity, we assume the same diameter $a$ for all the ions and chain monomers.\nFor dsDNA, the mean bond length ${\\langle b \\rangle} = 1.1 a$ $(\\approx a)$ in our model maps to \nthe effective charge separation ($\\approx 1.7$ {\\AA}) along the chain.\nConsidering $\\lambda_{B} = 7.1$ {\\AA} in water at room temperature, we set $\\lambda_{B}=4 a$ $(\\approx 7.1\/1.7\\times a)$.\nSince the focus of this study is on the effects of the bending rigidity of the PE chain, $\\kappa$ in Eq.\\ref{ua} was adjusted in the range $0\\leq \\kappa\\leq 120$ $k_{B}T\/\\textrm{rad}^2$. \n\\\\\n\n{\\bf Simulation. } \nFor conformational sampling of the brush, we integrated the Langevin equation in the underdamped condition \\cite{1992Thirumalai695}, \n\\begin{align}\nm\\frac{d^2\\vec{r}_i}{dt^2}=-\\zeta_{\\text{MD}}\\frac{d\\vec{r}_i}{dt}-\\vec{\\nabla}_{\\vec{r}_i}U(\\vec{r}_1,\\vec{r}_2,\\ldots)+\\vec{\\xi}(t), \n\\end{align}\nusing a small friction coefficient $\\zeta_{\\text{MD}} = 0.1 m\/\\tau_{\\text{MD}}$ and a time step $\\delta t = 0.01 \\tau_{\\text{MD}}$, \nwith the characteristic time scale $\\tau_{\\text{MD}} = (ma^2\/{\\epsilon})^{1\/2}$. \nWe started from an initial configuration where polymer chains were vertically stretched, \nand monovalent counterions were homogeneously distributed in the brush region.\nThis salt-free brush was first equilibrated for $10^4$ $\\tau_{\\text{MD}}$; then trivalent cations at a 1:3 stoichiometric concentration ratio \nwith respect to the polyelectrolyte charges \\cite{2017Hyeon1579} were randomly added, together with their monovalent coions (anions), into the brush-free zone (see Fig.~\\ref{cfg}A). \nDepending on the value of $\\kappa$, \ntrivalent cations induce either an immediate collapse or a bundling of neighboring chains in the brush. 
\nIn the latter case, an intermediate bundle either merges into a thicker one with other bundles nearby, \nor collapses onto the grafting surface \\emph{irreversibly}.\nFor stiff chains with $\\kappa = 60$ or $120$ $k_{B}T\/\\textrm{rad}^2$, it takes longer than $ 10^{6}$ $\\tau_{\\text{MD}}$ \nbefore the whole brush collapses and the mean height of chains reaches the steady state. \nProduction runs were then generated for a further $5\\times10^{4} \\tau_{\\text{MD}}$. \nBrush configurations were collected every $50 \\tau_{\\text{MD}}$ for the analysis of static properties.\nUnless stated otherwise, all the conformational properties reported here were averaged over the ensemble of trajectories.\n\nTo probe the dynamics of condensates, \nwe performed Brownian dynamics (BD) simulations by integrating the following equation of motion \n\\begin{equation}\n\\frac{d \\vec{r}_{i}}{dt} = - \\frac{D_{i0}}{k_B T}{\\vec{\\nabla}}_{\\vec{r}_{i}} U(\\vec{r}_1,...,\\vec{r}_N) +\\vec{R}_{i}(t),\n\\end{equation}\nwhere $D_{i0}$ is the bare diffusion coefficient of the $i$-th particle, \nand $\\vec{R}_{i}(t)$ is the Gaussian noise satisfying $\\langle\\vec{R}_i(t)\\rangle=0$ and \n$\\langle \\vec{R}_{i}(t) \\cdot \\vec{R}_{j}(t')\\rangle = 6 D_{i0} \\delta_{ij} \\delta(t-t')$. \n$D_{i0}$ was estimated via $k_{B}T \/ (6 \\pi \\eta R)$, where $\\eta = 0.89\\times 10^{-3}$ Pa$\\cdot$s is the viscosity of water \nand $R$ is the hydration radius of all the particles. \nWe chose an integration time step $\\delta t_{\\text{BD}} = 2 \\times 10^{-4} \\tau_{\\text{BD}}$ \nwith the Brownian time $\\tau_{\\text{BD}} = a^{2}\/D_{i0}$ ($\\sim 4$ ns, assuming that $R \\sim 10$ {\\AA}).\nStarting from the last configuration of the brush in the MD simulations, the BD simulation was performed for $4 \\times 10^{3} \\tau_{\\text{BD}}$.\nSimulations were all carried out by using the ESPResSo 3.3.1 package \\cite{2006Holm704,2013Holm1}. More details can be found in Ref.\\cite{2017Hyeon1579}. 
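The overdamped equation of motion above is commonly advanced with the Euler-Maruyama scheme, $\vec r(t+\delta t)=\vec r(t)-\frac{D_{0}}{k_BT}\vec\nabla U\,\delta t+\sqrt{2D_{0}\,\delta t}\,\vec\xi$ with unit Gaussian $\vec\xi$, which reproduces the noise statistics quoted in the text. A minimal free-diffusion sketch (no forces; illustrative, not the ESPResSo integrator) recovering $\langle r^2\rangle = 6D_{0}t$:

```python
import numpy as np

def bd_step(r, force, d0, dt, rng, kT=1.0):
    """One Euler-Maruyama step of the overdamped Langevin equation."""
    drift = (d0 / kT) * force * dt
    noise = np.sqrt(2.0 * d0 * dt) * rng.standard_normal(r.shape)
    return r + drift + noise

rng = np.random.default_rng(1)
d0, dt, nsteps = 1.0, 1e-3, 200
r = np.zeros((5000, 3))                      # 5000 non-interacting particles
for _ in range(nsteps):
    r = bd_step(r, np.zeros_like(r), d0, dt, rng)
msd = (r**2).sum(axis=1).mean()
print(msd, 6 * d0 * nsteps * dt)             # MSD should approach 6 D0 t
```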
\n\\\\\n\n{\\bf Apparent persistence length of brush chain. } \nUnder the simplifying assumption that, as in an isolated semiflexible chain, the correlation between bond vectors decays exponentially with their separation along the chain ($g(s)=\\langle \\vec{u}_i\\cdot\\vec{u}_{i+s}\\rangle \\sim e^{-s\/l_p}$, where $\\vec{u}_i=(\\vec{r}_{i+1} - \\vec{r}_{i}) \/ |\\vec{r}_{i+1} - \\vec{r}_{i}|$) (Fig.~S7), \nwe quantified an ``apparent\" persistence length $l_{p}$. \n\n\n\n\n\n\n\n\\section{Supplementary Material}\nSupplementary material contains the Supplementary Figures S1 -- S7 and Supplementary Movies 1 and 2. \n\n\\begin{acknowledgments}\nWe thank the Center for Advanced Computation in KIAS for providing computing resources. \n\\end{acknowledgments}\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}}
{"text":"\n\\section{Introduction\\label{intro}}\nThe first computer models of quantum systems based on the Bohm-de Broglie causal interpretation \\cite{B52,DEBR60} were developed by Dewdney in his Ph.D. thesis (1983) \\cite{DEWD83}. These models include the two-slit experiment and scattering from square barriers and square wells. Some of the results appeared in earlier articles with Philippidis and Hiley (1979) \\cite{DEWD79} and with Hiley (1982) \\cite{DEWD82}. In later years, Dewdney developed a computer model of Rauch's neutron interferometer (1982) \\cite{DEWD85} and, with Kyprianidis and Holland, models of a spin measurement in a Stern-Gerlach experiment (1986) \\cite{DEWD86}. He also went on, with Kyprianidis and Holland, to develop computer models of spin superposition in neutron interferometry (1987) \\cite{DEWD87} and of Bohm's spin version of the Einstein-Podolsky-Rosen experiment (EPR experiment) (1987) \\cite{DEWDEPR87}. A review of this work appears in a 1988 {\\it Nature} article \\cite{DEWD88}. The computer models of spin were based on the 1955 Bohm-Schiller-Tiomno causal interpretation of spin \\cite{BST55}. 
Home and Kaloyerou in 1989 reproduced the computer model of the two-slit interference experiment \\cite{K89} in the context of arguing against Bohr's Principle of Complementarity \\cite{BR28}. \n\nThough computer models of the two-slit experiment with each slit modeled by a one-dimensional Gaussian wave-packet have existed for many years, the extension to pinholes has never been made. We thought, therefore, that it might be interesting to attempt such an extension by modeling each pinhole by a two-dimensional Gaussian wave-packet. Though no new conceptual results are expected, it is worth examining whether the trajectories, now in three-dimensional space, retain the characteristic features of the two-slit case. We shall see that a quantum potential structure is produced which guides the particles to the bright fringes as in the two-slit case. With the pinholes along the $x$-axis, trajectories in the $xy$-plane (see Fig. \\ref{OrAxes}) show the same interference behaviour as in the two-slit case, while trajectories in the $zy$-plane show no interference.\n\n\\section{The mathematical model\\label{MM}}\nPhilippidis et al. \\cite{DEWD79} derived the Gaussian function they used to model each of the two slits in the two-slit experiment using Feynman's path integral formulation. We, instead, have generalised the one-dimensional Gaussian solutions of the Schr\\\"{o}dinger equation developed by Bohm in chapter three of his book \\cite{B51} to two dimensions. Philippidis et al. \\cite{DEWD79} considered Young's two-slit experiment for electrons and used the values of an actual Young's two-slit experiment for electrons performed by J\\\"{o}nsson in 1961 \\cite{jon61}. We will also model the interference of electrons and use J\\\"{o}nsson's values, except that we will slightly vary the distance between the pinholes and the detecting screen in order to obtain clearer interference or quantum potential plots. 
We will, in any case, give the values used for each case we consider. \n\nThe orientation of the axes and the position of the pinholes are shown in Figs. \\ref{OrAxes} and \\ref{figPos}, respectively.\nThe pinholes are represented by two dimensional Gaussian wave-packets $\\psi_1$ and $\\psi_2$ given by\n\\begin{eqnarray}\n\\psi_1(x,y,z,t)&=&A_1\\tilde{R}_{1}\\nonumber\\\\\n&&\\times\\exp\\left[{- \\frac{(x+x_0-v_x t)^2}{2\\Delta x_n^2}}\\right]\\exp\\left[{\\frac{i\\alpha t(x+x_0-v_x t)^2}{2\\Delta x_{n1}^2}}\\right]\\nonumber\\\\\n&&\\times\\exp\\left[{- \\frac{(z+z_0-v_z t)^2}{2\\Delta z_n^2}}\\right] \\exp\\left[{\\frac{i\\alpha t(z+z_0-v_z t)^2}{2\\Delta z_{n1}^2}}\\right]\\exp\\left[ik_x(x+x_0)\\right]\\nonumber\\\\\n&&\\times\\exp\\left[ik_z(z+z_0)\\right]\\exp\\left(ik_y y\\right)\\exp\\left[-i(\\omega_x+\\omega_z) t\\right],\\label{psi1}\n\\end{eqnarray}\n\\begin{figure}[h]\n\\unitlength=1in\n\\hspace*{1.3in}\\includegraphics[width=3.4in,height=1.8in]{figure1.jpg} \n\\caption{The orientation of the axes.\\label{OrAxes}}\n\\end{figure}\n\\begin{figure}[h]\n\\unitlength=1in\n\\hspace*{1.3in}\\includegraphics[width=3.4in,height=1.8in] {figure2.jpg} \n\\caption{Positions of the pinholes: The two pinholes, represented by the two-dimensional Gaussian wave-packets $\\psi_1$ and $\\psi_2$, are placed at $x_0=\\pm5\\times10^{-7}$ m and $z_0=0$ m. The width of the Gaussian packet on the negative $x$-side is $\\Delta x_{n0}=\\Delta z_{n0}=7\\times 10^{-8}$ m, while the width of the Gaussian packet on the positive $x$-side is $\\Delta x_{p0}=\\Delta z_{p0}=\\Delta x_{n0}$ or $\\Delta x_{p0}=\\Delta z_{p0}=2\\Delta x_{n0}$ for the case of unequal widths. 
The velocity components are $v_x=150\\;\\mathrm{ms}^{-1}$, $v_y=1.3\\times 10^{8}\\;\\mathrm{ms}^{-1}$ and $v_z=0\\;\\mathrm{ms}^{-1}$.\\label{figPos} }\n\\end{figure}\n\\newpage\n\\mbox{}\\\\\nand\n\\begin{eqnarray}\n\\psi_2(x,y,z,t)&=&A_2\\tilde{R}_{2}\\nonumber\\\\\n&&\\times\\exp\\left[{- \\frac{(x-x_0+v_x t)^2}{2\\Delta x_p^2}}\\right]\\exp\\left[{\\frac{i\\alpha t(x-x_0+v_x t)^2}{2\\Delta x_{p1}^2}}\\right]\\nonumber\\\\\n&&\\times\\exp\\left[{- \\frac{(z-z_0+v_z t)^2}{2\\Delta z_p^2}}\\right] \\exp\\left[{\\frac{i\\alpha t(z-z_0+v_z t)^2}{2\\Delta z_{p1}^2}}\\right]\\exp\\left[-ik_x(x-x_0)\\right]\\nonumber\\\\\n&&\\times\\exp\\left[-ik_z(z-z_0)\\right]\\exp\\left(ik_yy\\right)\\exp\\left[-i(\\omega_x+\\omega_z) t+i\\chi\\right].\\label{psi2}\n\\end{eqnarray}\nThe various functions and constants used are given by:\n\\begin{eqnarray}\n&&\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\tilde{R}_{1}=\\cos^2\\frac{\\pi}{4},\\;\\;\\;\\tilde{R}_{2}=\\sin^2\\frac{\\pi}{4}\\;\\;\\;\\mathrm{(for\\; equal\\; amplitudes)},\\nonumber\\\\\n\\alpha&=&\\frac{\\hbar}{m},\\; \\;\\;v_x=\\frac{\\hbar k_x}{m}, \\;\\;\\;\\omega_x=\\frac{\\hbar k_x^2}{2m},\\;\\;\\;v_z=\\frac{\\hbar k_z}{m}, \\;\\;\\;\\omega_z=\\frac{\\hbar k_z^2}{2m},\\nonumber\\\\ \\nonumber\\\\\n&&\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\Delta x_{n0}=\\mathrm{width \\;of\\; the}\\;-x_0\\;\\mathrm{wavepacket}, \\;\\;\\;\\Delta z_{n0}=\\Delta x_{n0}\\nonumber\\\\\n&&\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\Delta x_{p0}=\\mathrm{width \\;of\\; the}\\;+x_0\\;\\mathrm{wavepacket}, \\;\\;\\;\\Delta z_{p0}=\\Delta x_{p0}\\nonumber\\\\\n&&\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!A_1(t)=A_{xn}(t)A_{zn}(t)= \\left( \\frac{2\\pi}{\\Delta x_{n0}^2+i\\alpha t} \\right)^{\\frac{1}{2}}\\left(\\frac{2\\pi}{\\Delta z_{n0}^2+i\\alpha t}\\right)^{\\frac{1}{2}} \\nonumber \\\\ \\nonumber\\\\\n&&\\;\\;=\\beta_{xn} (t)e^{i\\theta_{xn}(t)}\\beta_{zn}(t)e^{i\\theta_{zn}(t)}=\\beta_1e^{ i2\\theta_1}, 
\\label{AA11}\\\\\n&&\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!A_2(t)=A_{xp}(t)A_{zp}(t)= \\left( \\frac{2\\pi}{\\Delta x_{p0}^2+i\\alpha t} \\right)^{\\frac{1}{2}}\\left(\\frac{2\\pi}{\\Delta z_{p0}^2+i\\alpha t}\\right)^{\\frac{1}{2}} \\nonumber\\\\ \\nonumber\\\\\n&&\\;\\;=\\beta_{xp} (t)e^{i\\theta_{xp}(t)}\\beta_{zp}(t)e^{i\\theta_{zp}(t)}=\\beta_2 e^{ i2\\theta_2}, \\label{AA22}\n\\end{eqnarray}\nwhere\n\\begin{eqnarray}\n&&\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\beta_{xn}(t)=\\beta_{zn}(t) =\\left( \\frac{4\\pi^2}{\\Delta x_{n0}^4+\\alpha^2 t^2}\\right)^{\\frac{1}{4}},\\;\\;\\;\\beta_{1}(t)=\\left( \\frac{4\\pi^2}{\\Delta x_{n0}^4+\\alpha^2 t^2}\\right)^{\\frac{1}{2}}\\nonumber\\\\ \\nonumber\\\\\n&&\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\theta_1=\\theta_{xn}(t)=\\theta_{zn}(t) =\\frac{1}{2} \\tan^{-1} \\left(-\\frac{\\alpha t}{\\Delta x_{n0}^2} \\right)+2k\\pi,\\;\\;\\;k=0,1,\\ldots,\\nonumber\\\\ \\nonumber\\\\\n&&\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\beta_{xp}(t)=\\beta_{zp}(t) =\\left( \\frac{4\\pi^2}{\\Delta x_{p0}^4+\\alpha^2 t^2}\\right)^{\\frac{1}{4}},\\;\\;\\;\\beta_{2}(t)=\\left( \\frac{4\\pi^2}{\\Delta x_{p0}^4+\\alpha^2 t^2}\\right)^{\\frac{1}{2}}\\nonumber\\\\ \\nonumber\\\\\n&&\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\theta_2=\\theta_{xp}(t)=\\theta_{zp}(t) =\\frac{1}{2} \\tan^{-1} \\left(-\\frac{\\alpha t}{\\Delta x_{p0}^2} \\right)+2k\\pi,\\;\\;\\;k=0,1,\\ldots,\\nonumber\\\\ \\nonumber\\\\\n&&\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\Delta x_n^2=\\Delta z_n^2 =\\left(\\Delta x_{n0}^2+\\frac{\\alpha^2 t^2}{\\Delta x_{n0}^2}\\right),\\;\\;\\;\\Delta x_{n1}^2=\\Delta z_{n1}^2=\\left(\\Delta x_{n0}^4+\\alpha^2 t^2\\right),\\nonumber\\\\\n&&\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\Delta x_p^2= \\Delta z_p^2 =\\left(\\Delta x_{p0}^2+\\frac{\\alpha^2 t^2}{\\Delta x_{p0}^2}\\right),\\;\\;\\;\\Delta x_{p1}^2=\\Delta z_{p1}^2=\\left(\\Delta x_{p0}^4+\\alpha^2 t^2\\right),\\nonumber\n\\end{eqnarray}\nFurther definitions and values of quantities 
used in the plots are given in Table \\ref{COMPP}.1. Note that the Gaussian wave packets are functions of $x$ and $z$, while the $y$-behaviour is represented by a plane wave. Plane waves are useful idealisations that are not realisable in practice. This leads to a computer model in which the intensity $R^2$ and quantum potential $Q$ at a given time maintain the same form from $y=-\\infty$ to $+\\infty$. However, both the intensity and quantum potential evolve in time, so that an electron at each position of its trajectory sees an evolving intensity and quantum potential. \n\nThe total wave function is the sum of the two wave packets:\n\\begin{equation}\n\\psi=\\psi_1+\\psi_2.\n\\end{equation}\n\nOur computer model is based on the Bohm-de Broglie causal interpretation and we refer the reader to Bohm's original papers for details of the interpretation \\cite{B52}. Here we will give only a very brief outline in order to introduce the elements needed to develop the formulae and equations of the model. The interpretation is obtained by substituting $\\psi(x,y,z,t)=R(x,y,z,t)\\exp\\left[iS(x,y,z,t)\/\\hbar\\right]$, where $R$ and $S$ are two fields which codetermine one another, into the Schr\\\"{o}dinger equation\n\\[\ni\\hbar\\frac{\\partial \\psi}{\\partial t}=-\\frac{\\hbar^2}{2m}\\nabla^2\\psi+V\\psi,\n\\]\nwhere $V=V(x,y,z,t)$. Differentiating and equating real and imaginary terms gives two equations. One is the usual continuity equation,\n\\begin{equation}\n\\frac{\\partial R^{2}}{\\partial t}+\\nabla\\cdot\\left(R^{2}\\frac{\\nabla S}{m}\\right)=0,\n\\end{equation}\nwhich expresses the conservation of probability $R^2$. 
The other is a Hamilton-Jacobi type equation\n\\begin{equation}\n -\\frac{\\partial S}{\\partial t}= \\frac{(\\nabla S)^{2}}{2m}+V+\\left (-\\frac{\\hbar^{2}}{2m}\\frac{\\nabla^{2} R}{R}\\right).\n\\end{equation}\nThis differs from the classical Hamilton-Jacobi equation by the extra term\n\\begin{equation}\nQ=-\\frac{\\hbar^{2}}{2m}\\frac{\\nabla^{2} R}{R},\n\\end{equation}\nwhich Bohm called the quantum potential. The classical Hamilton-Jacobi equation describes the behaviour of a particle with energy $E$, momentum $p$ and velocity $v$ under the action of a potential $V$, with the energy, momentum and velocity given by\n\\begin{eqnarray}\nE&=&-\\frac{\\partial S}{\\partial t}, \\nonumber\\\\\np&=&\\nabla S,\\nonumber\\\\\nv_p(\\mbox{$\\vec{r}$})&=&\\frac{d \\mbox{$\\vec{r}$}}{dt}=\\frac{\\nabla S}{m}. \\label{ENMOM}\n\\end{eqnarray}\nBohm retain's these definitions, while de-Broglie focused the third definition and called it the guidance formula. This allows quantum entities such as electrons, protons, neutrons etc. (but not photons\\footnote{The causal interpretation based on the Schr\\\"{o}dinger is obviously non-relativistic, but it is more than adequate for the description of the behaviour of electrons, protons, neutrons etc., in a large range of circumstances. This is not so for photons, the proper description of which requires quantum optics, which is based on the second-quantisation of Maxwell's equation. Photons, more generally, the electromagnetic field are described by the causal interpretation of boson fields, which includes the electromagnetic field \\cite{K85}.}) to be viewed as particles (always) with energy, momentum and velocity given by (\\ref{ENMOM}). Particle trajectories are found by integrating $v(\\mbox{$\\vec{r}$})$ given in (\\ref{ENMOM}). The extra $Q$ term produces quantum behaviour such as the interference of particles (which is what we will model in this article). 
Strictly, since the $R$ and $S$-fields codetermine one another, the $S$-field is as much responsible for quantum behaviour as the $R$-field; the $S$-field through the guidance formula, and the $R$-field through the quantum potential. \n\nThe Born probability rule is an essential interpretational element that links theory with experiment. As such it will remain a part of any interpretation of the quantum theory. This is certainly true for the causal interpretation, where probability enters because the initial positions of particles cannot be determined precisely. Instead, initial positions are given with a probability found from the usual probability density $|\\psi(x,y,z,t=0)|^2=R(x,y,z,t=0)^2$.\n\nThe results of the usual interpretation are identical with those of the causal interpretation as long as the following assumptions are satisfied:\n \\begin{enumerate}\n \\item[(1)] The $\\psi$-field satisfies Schr\\\"{o}dinger equation.\n \\item[(2)] Particle momentum is restricted to $\\mbox{$\\vec{p}$}=\\nabla S$.\n \\item[(3)] Particle position at time $t$ is given by the\n probability density $|\\psi(\\mbox{$\\vec{r}$},t)|^2$.\n \\end{enumerate}\n\nTo obtain the intensity, $Q$ and trajectories we must first find the $R$ and $S$-fields defined by $\\psi=Re^{iS\/\\hbar}$ in terms of $R_1$, $R_2$, $S_1$ and $S_2$ defined by $\\psi_1=A_1 R_1e^{iS'_1\/\\hbar} =\\beta_1R_1e^{iS_1\/\\hbar}$ and $\\psi_2=A_2 R_2e^{iS'_2\/\\hbar} =\\beta_2 R_2e^{iS_2\/\\hbar}$. We do this by first noting Eq. (\\ref{AA11}) for $A_1$ and Eq. (\\ref{AA22}) for $A_2$ and then comparing $\\psi_1=A_1 R_1e^{iS'_1\/\\hbar} =\\beta_1R_1e^{iS_1\/\\hbar}$ and $\\psi_2=A_2 R_2e^{iS'_2\/\\hbar} =\\beta_2 R_2e^{iS_2\/\\hbar}$ with Eqs. 
(\\ref{psi1}) and (\\ref{psi2}) to get\n\\begin{eqnarray}\nR_1(x,z,t)&=& \\beta_1 \\tilde{R}_{1}\\exp\\left[{-\\frac{(x+x_0-v_x t)^2}{2\\Delta x_n^2}}\\right]\\exp\\left[{- \\frac{(z+z_0-v_z t)^2}{2\\Delta z_n^2}}\\right] \\nonumber\\\\\nR_2(x,z,t)&=& \\beta_2 \\tilde{R}_{2}\\exp\\left[{-\\frac{(x-x_0+v_x t)^2}{2\\Delta x_p^2}}\\right] \\exp\\left[{- \\frac{(z-z_0+v_z t)^2}{2\\Delta z_p^2}} \\right] \\nonumber\\\\\nS_1(x,y,z,t)&=&{\\frac{\\hbar\\alpha t(x+x_0-v_x t)^2}{2\\Delta x_{n1}^2}}+{\\frac{\\hbar\\alpha t(z+z_0-v_z t)^2}{2\\Delta z_{n1}^2}}\\nonumber\\\\\n&& + \\hbar k_x(x+x_0)+\\hbar k_z(z+z_0)+\\hbar k_y y-\\hbar(\\omega_x+\\omega_z) t+2\\hbar\\theta_1\\nonumber\\\\\nS_2(x,y,z,t)&=&\\frac{\\hbar\\alpha t(x-x_0+v_x t)^2}{2\\Delta x_{p1}^2}+\\frac{\\hbar\\alpha t(z-z_0+v_z t)^2}{2\\Delta z_{p1}^2}\\nonumber\\\\\n&&-\\hbar k_x(x-x_0)-\\hbar k_z(z-z_0)+\\hbar k_y y-\\hbar(\\omega_x+\\omega_z) t+\\hbar\\chi+2\\hbar\\theta_2. \\nonumber\\\\\n\\end{eqnarray}\nThe intensity (probability density) is easily found from $|\\psi|^2=R^2$:\n\\begin{equation}\nR^2= R_1^2+R_2^2+2R_1 R_2\\cos\\left( \\frac{S_1-S_2}{\\hbar}\\right).\\label{RSQ}\n\\end{equation}\nThe quantum potential ($Q$) is found from:\n\\begin{eqnarray}\nQ&=&-\\frac{\\hbar^{2}}{2m}\\frac{\\nabla^{2} R}{R}\\nonumber\\\\\n&=&-\\frac{\\hbar^{2}}{2m}\\frac{1}{R}\\left(\\frac{\\partial^2 R}{\\partial x^2}+ \\frac{\\partial^2 R}{\\partial y^2}+ \\frac{\\partial^2 R}{\\partial z^2} \\right)=Q_x+Q_y+Q_z,\\nonumber\n\\end{eqnarray}\nwhere\n\\begin{equation}\nQ_x=-\\frac{\\hbar^{2}}{2mR}\\frac{\\partial^2 R}{\\partial x^2}=\\frac{\\hbar^{2}}{8mR^4}\\left(\\frac{\\partial R^2}{\\partial x}\\right)^2 -\\frac{\\hbar^{2}}{4mR^2}\\frac{\\partial^2 R^2}{\\partial x^2}, \\label{QPF}\n\\end{equation}\nwith similar formulae for $Q_y$ and $Q_z$. Substituting Eq. 
(\\ref{RSQ}) into the formulae for $Q_x$ and $Q_y$ and differentiating gives $Q_y=0$ and\n\\begin{eqnarray}\nQ_x&=&- \\frac{\\hbar^2}{4mR^4} \\left[ \\frac{(x+x_0-v_x t)}{\\Delta x_n^2}R_1^2+\\frac{(x-x_0+v_x t)}{\\Delta x_p^2}R_2^2 \\right. \\nonumber\\\\\n&&+ \\left. R_1^2 R_2^2 \\left[ \\left( \\frac{(x+x_0-v_x t)}{\\Delta x_n^2}+\\frac{(x-x_0+v_x t)}{\\Delta x_p^2} \\right) \\cos(S_{12}) +S_{x12}\\sin(S_{12})\n \\right]\\right]^2\\nonumber\\\\\n&&\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!-\\frac{\\hbar^2}{2mR^2} \\left[ R_1^2 \\left( \\frac{2(x+x_0-v_x t)^2}{\\Delta x_n^4}-\\frac{1}{\\Delta x_n^2} \\right) + R_2^2\\left(\\frac{2(x-x_0+v_x t)^2}{\\Delta x_p^4}-\\frac{1}{\\Delta x_p^2} \\right) \\right] \\nonumber\\\\\n&&\\;\\;-\\frac{\\hbar^2R_1 R_2}{2mR^2} \\left( \\frac{(x+x_0-v_x t)^2}{\\Delta x_n^4}-\\frac{1}{\\Delta x_n^2}+2\\frac{(x+x_0-v_x t)(x-x_0+v_x t)}{\\Delta x_n^2 \\Delta x_p^2} \\right. \\nonumber\\\\\n&&\\;\\;\\;\\;\\;\\;\\;\\;\\;\\;\\;\\;\\;\\;\\;\\;\\;\\;\\;\\;\\;\\;\\;\\;\\;\\;\\;\\;\\; \\;\\;\\;\\;\\;\\;\\;\\;\\;\\;\\;\\;\\;\\;\\;\\;\\;\\;\\;\\;\\;\\;\\;\\;\\;+ \\left. \\frac{(x-x_0+v_x t)^2}{\\Delta x_p^4} - \\frac{1}{\\Delta x_p^2} \\right)\\cos(S_{12}) \\nonumber\\\\\n&&\\;\\,-\\frac{\\hbar^2R_1 R_2}{2mR^2} \\left[ 2\\left( \\frac{(x+x_0-v_x t)}{\\Delta x_n^2}+\\frac{(x-x_0+v_x t)}{\\Delta x_p^2} \\right)S_{x12} \\sin(S_{12}) \\right. \\nonumber\\\\\n&&\\;\\;\\;\\;\\;\\;\\;\\;\\;\\;\\;\\;\\;\\;\\;\\;\\;\\;\\;\\;\\;\\;\\;\\;\\;\\;\\;\\;\\;\\;\\;\\;\\;\\;\\;\\;\\;\\;\\;\\;\\;\\;\\;\\;\\;\\;\\;\\;\\;\\;\\;\\; \\left. - S_{xx12}\\sin(S_{12}) - S_{x12}^2 \\cos(S_{12}) \\right]\\label{QPX}\n\\end{eqnarray}\nwhere \n\\begin{eqnarray}\n&&\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\! S_{12}=S_1-S_2 \\nonumber\\\\\n&&\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\! S_{x12}= \\frac{\\alpha t(x+x_0-v_x t)}{\\Delta x_{n1}^2} - \\frac{\\alpha t(x-x_0+v_x t)}{\\Delta x_{p1}^2} + 2k_x\\nonumber\\\\\n&&\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\! 
S_{xx12}= \\frac{\\alpha t}{\\Delta x_{n1}^2} - \\frac{\\alpha t}{\\Delta x_{p1}^2}.\\nonumber\n\\end{eqnarray}\nThe formula for $Q_z$ is identical to that of $Q_x$, except that $x$ is replaced by $z$ everywhere it appears.\n\nThe trajectories, as we have said, are found by integrating Eq. (\\ref{ENMOM}). Therefore, to find the trajectories we do not need to find $S$, only its derivatives with respect to $x,y,z$. This can be done using the formula\n\\begin{equation}\n\\nabla S=\\frac{\\hbar}{2i}\\left( \\frac{\\nabla \\psi}{\\psi}-\\frac{\\nabla \\psi^*}{\\psi^*} \\right).\n\\end{equation}\nWe get\n\\begin{equation}\n\\frac{\\partial S}{\\partial y}=\\hbar k_y,\n\\end{equation}\nand\n\\begin{eqnarray}\n\\frac{\\partial S}{\\partial x}&=&\\frac{\\hbar}{R^2}\\left[ \\frac{x}{\\Delta x_n^2\\Delta x_p^2 }R_1 R_2\\sin(S_{12}) (\\Delta x_n^2-\\Delta x_p^2) \\right. \\nonumber\\\\\n&&+ \\frac{\\alpha t x}{\\Delta x_{n1}^2\\Delta x_{p1}^2}\\left[ R_1^2\\Delta x_{p1}^2 + R_2^2\\Delta x_{n1}^2 + R_1 R_2\\cos(S_{12}) \\right](\\Delta x_{n1}^2+\\Delta x_{p1}^2) \\nonumber\\\\\n&&\\;- \\frac{(x_0-v_xt)}{\\Delta x_n^2\\Delta x_p^2 } R_1 R_2\\sin(S_{12}) (\\Delta x_n^2+\\Delta x_p^2) \\nonumber\\\\\n&&\\;+ \\frac{(\\alpha t x_0-\\alpha t^2v_x)}{\\Delta x_{n1}^2\\Delta x_{p1}^2}\\left[R_1^2\\Delta x_{p1}^2 - R_2^2\\Delta x_{n1}^2 + R_1 R_2\\cos(S_{12}) \\right](\\Delta x_{p1}^2-\\Delta x_{n1}^2) \\nonumber\\\\\n&&\\;\\;\\;\\left.+ k_x( R_1^2 - R_2^2) \\right] \\nonumber\\\\\n\\end{eqnarray}\nThe $z$-derivative $\\frac{\\partial S}{\\partial z}$ is identical to $\\frac{\\partial S}{\\partial x}$, except that $x$ is everywhere replaced by $z$. From Eq. 
(\\ref{ENMOM}),\n\\begin{equation}\nv_p=v_{px}\\hat{i}+v_{py}\\hat{j}+v_{pz}\\hat{k}=\\frac{dx}{dt}\\hat{i}+\\frac{dy}{dt}\\hat{j}+\\frac{dz}{dt}\\hat{k}=\\frac{1}{m}\\left( \\frac{\\partial S}{\\partial x}\\hat{i}+\\frac{\\partial S}{\\partial y}\\hat{j}+\\frac{\\partial S}{\\partial z}\\hat{k} \\right),\n\\end{equation}\nwe see that to obtain the electron trajectories $\\mbox{$\\vec{r}$}(t)=x(t)\\hat{i}+y(t)\\hat{j}+z(t)\\hat{k}$, we must solve the following differential equations with various initial conditions:\n\\begin{eqnarray}\n\\frac{dx(t)}{dt} =\\frac{1}{m}\\frac{\\partial S}{\\partial x}, \\label{DXDT}\\\\\n\\frac{dy(t)}{dt} =\\frac{1}{m}\\frac{\\partial S}{\\partial y}, \\label{DYDT}\\\\\n\\frac{dz(t)}{dt} =\\frac{1}{m}\\frac{\\partial S}{\\partial z}. \\label{DZDT}\n\\end{eqnarray}\nNote that the components of the particle velocity $v_p$ are different from the velocities of the wave packets $v_x$, $v_y$ and $v_z$. Eq. (\\ref{DYDT}) can be solved immediately to give $y(t)=\\hbar k_y t\/m$. Eqs. (\\ref{DXDT}) and (\\ref{DZDT}) are coupled non-linear differential equations. These were solved numerically using a Fortran program we wrote based on an adapted fourth-order Runge-Kutta algorithm \\cite{BF89} with fixed step size. This completes the various elements of the mathematical model. In the following section we show the various plots. \n \\section{The Computer plots\\label{COMPP}}\nFor the sake of comparison, we first reproduce plots of the intensity, $Q$ and trajectories for the two-slit experiment modeled by one-dimensional Gaussian wave-packets. These are shown in Figs. \\ref{INT1DG}, \\ref{QP1DG}, and \\ref{TRAJ1DG}. \\\\ \\mbox{}\\\\\n\\begin{center}\n\\begin{tabular}{|l|} \\hline \n\\hspace*{.1in}{\\bf Table \\ref{COMPP}.1} Definition and values of the quantities used in the plots. 
\\hspace*{0.4cm}\\\\ \n\\end{tabular}\n\\begin{tabular}{|l|l|l|} \\hline\\hline \n\\textbf{Quantity}&\\hspace*{.8in}\\textbf{Definition} & \\textbf{Value} \\\\ \\hline\\hline \n$b$ & Angle for equal amplitudes & $\\pi\/4$\\\\ \\hline\n$b$ & Angle for unequal amplitudes & $\\arccos(1\/\\sqrt{4})$ \\\\ \\hline\n$\\tilde{R_1}$& Amplitude of $\\psi_1$ &$\\cos^2 (b)$ \\\\ \\hline\n$\\tilde{R_2}$& Amplitude of $\\psi_2$ &$\\sin^2 (b)$\\\\ \\hline\n$x_0$ & $x$-distance of the center of the& $5\\times 10^{-7}$ m\\\\ \n & pinhole from the origin & \\\\ \\hline\n$z_0$ & $z$-distance of the center of the& $0$ m \\\\ \n & pinhole from the origin & \\\\ \\hline\n$h$ & Planck's constant & $6.62607004\\times10^{-34}$ Js \\\\ \\hline\n$\\hbar$ & Planck's constant\/$2\\pi$ & $1.05457180\\times10^{-34}$ Js \\\\ \\hline\n$m$ & Mass of the electron & $9.10938356\\times10^{-31}$ kg \\\\ \\hline\n$\\alpha$ & $\\hbar\/m$ & $0.00011576764$ m$^{2}$s$^{-1}$ \\\\ \\hline\n$k_x$ & Magnitude of $x$-wavenumber & $1.295698717\\times10^6$ m$^{-1}$ \\\\ \\hline\n$k_y$ & Magnitude of $y$-wavenumber & $1.122938132\\times10^{12}$ m$^{-1}$ \\\\ \\hline\n$k_z$ & Magnitude of $z$-wavenumber & $0$ \\\\ \\hline\n$v_x$ & $x$-component of the velocity & $\\alpha k_x=150$ ms$^{-1}$ \\\\ \n & of the Gaussian wave-packet & \\\\ \\hline\n$v_y$ & $y$-component of the velocity & $\\alpha k_y=1.3\\times 10^8$ ms$^{-1}$ \\\\ \n & of the Gaussian wave-packet & \\\\ \\hline\n$v_z$ & $z$-component of the velocity & $\\alpha k_z=0$ ms$^{-1}$ \\\\ \n & of the Gaussian wave-packet & \\\\ \\hline\n$\\omega$ & Angular frequency & $\\hbar(k_x^2+k_y^2)\/2m$ \\\\ \\hline\n$\\chi$ & Phase shift of $\\psi_2$ & $0$ \\\\ \\hline\n$\\Delta x_{n0}=\\Delta z_{n0}$ & Width of the $-x_0$ wave-packet & $\\Delta x_{n0}=7\\times 10^{-8}$ m \\\\ \\hline\n$\\Delta x_{p0}=\\Delta z_{p0}$ & Width of the $+x_0$ wave-packet & $\\Delta x_{p0}=\\Delta x_{n0}$ \\\\ \\hline\n$\\Delta x_{n0}=\\Delta z_{n0}$ & Width of the $-x_0$ wave-packet & $\\Delta 
x_{n0}=7\\times 10^{-8}$ m \\\\ \n & for unequal pinhole widths & \\\\ \\hline\n$\\Delta x_{p0}=\\Delta z_{p0}$ & Width of the $+x_0$ wave-packet & $\\Delta x_{p0}=2\\Delta x_{n0}$ \\\\ \n & for unequal pinhole widths & \\\\ \\hline\n\\end{tabular}\n\\end{center}\n\\newpage\n\\begin{figure}[h]\n\\unitlength=1in\n\\hspace*{0.7in}\\includegraphics[width=4.6in,height=2.3in]{figure3.jpg}\n\\caption{Two orientations of the intensity in a two-slit interference experiment modeled by one-dimensional Gaussian wave-packets.\\label{INT1DG}}\n\\end{figure}\n\n\\begin{figure}[h]\n\\unitlength=1in\n\\hspace*{0.7in}\\includegraphics[width=4.6in,height=2.3in] {figure4.jpg} \n\\caption{Two orientations of the quantum potential in a two-slit interference experiment modeled by one-dimensional Gaussian wave-packets.\\label{QP1DG}}\n\\end{figure}\n\\begin{figure}[h]\n\\unitlength=1in\n\\hspace*{1.4in}\\includegraphics[width=3in,height=2.6in] {figure5v3_3.pdf} \n\\vspace*{0.0in}\n\\caption{The trajectories in a two-slit interference experiment modeled by one-dimensional Gaussian wave-packets.\\label{TRAJ1DG}}\n\\end{figure}\nThe quantities used for the two-pinhole experiment plots are given in Table 3.1. Note that the slit width used by J\\\"{o}nsson is $2\\times 10^{-7}$ m. The widths of the Gaussian \nwave-packets $\\psi_1$ and $\\psi_2$ are defined at half their amplitude. We have chosen the widths of $\\psi_1$ and $\\psi_2$ to be $\\Delta x_{n0}=\\Delta z_{n0}=\\Delta x_{p0}=\\Delta z_{p0}=7\\times 10^{-8}$ m so that the width of the base of $\\psi_1$ and $\\psi_2$ approximately corresponds to $2\\times10^{-7}$ m. For unequal pinhole widths, $\\Delta x_{p0}=\\Delta z_{p0}=2\\Delta x_{n0}$. We will consider three configurations: (1) Equal pinhole widths and equal amplitudes, (2) Equal pinhole widths and unequal amplitudes, and (3) Unequal pinhole widths and equal amplitudes. 
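As noted above, the coupled trajectory equations (\ref{DXDT}) and (\ref{DZDT}) were integrated with a fixed-step fourth-order Runge-Kutta scheme. A generic sketch of such an integrator (illustrative Python, not the authors' Fortran program; tested on $\dot{x}=-x$, whose exact solution is $e^{-t}$):

```python
import math

def rk4(f, y, t, dt, nsteps):
    """Fixed-step classical RK4 for dy/dt = f(t, y), with y a list of coordinates."""
    for _ in range(nsteps):
        k1 = f(t, y)
        k2 = f(t + dt / 2, [yi + dt / 2 * ki for yi, ki in zip(y, k1)])
        k3 = f(t + dt / 2, [yi + dt / 2 * ki for yi, ki in zip(y, k2)])
        k4 = f(t + dt, [yi + dt * ki for yi, ki in zip(y, k3)])
        y = [yi + dt / 6 * (a + 2 * b + 2 * c + d)
             for yi, a, b, c, d in zip(y, k1, k2, k3, k4)]
        t += dt
    return y

# Sanity check on dx/dt = -x with x(0) = 1: x(1) should be exp(-1).
x1, = rk4(lambda t, y: [-y[0]], [1.0], 0.0, 0.01, 100)
print(x1, math.exp(-1))
```

For the trajectories themselves, the right-hand side would evaluate $\frac{1}{m}\partial S/\partial x$ and $\frac{1}{m}\partial S/\partial z$ at each step.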
\n\nIn the two-dimensional case, i.e., the pinhole case, the intensity $R^2$ and quantum potential $Q$ are functions of four variables $x,y,z$ and $t$. To produce plots we note that because the $y$-behaviour is represented by a plane-wave, $Q_y=0$, while $Q_x$ and $Q_z$ depend only on $x$, $z$ and $t$. Similarly, the intensity does not depend on $y$. This means that the values of $R^2$ and $Q$ in the $xz$-plane at a given instant of time are the same from $y=-\\infty$ to $y=+\\infty$, as mentioned in \\S \\ref{MM}. At a later instant, the form of $R^2$ and $Q$ changes instantaneously from $y=-\\infty$ to $y=+\\infty$. This unphysical behaviour is due to the use of the plane-wave idealisation to represent the $y$-behaviour. A more realistic picture would be to also use a Gaussian in the $y$-direction. However, as we shall see, the model produces a realistic picture of particle trajectories which depend on $x,y,z,t$. Since the quantum potential and intensity change in time, the electron `sees' evolving values of these quantities. All the plots below show what the electron `sees' at a particular instant of time $t$ and a particular position $x,y,z$.\n\nTo graphically represent $R^2$ and $Q$ we proceeded in two ways. First, we produced animations of $R^2$ and $Q$. We produced animations of six frames, so that the sequence of frames is short enough to be reproduced in this article. In any case, our computer did not have enough memory to produce animations of more than six frames. The animations are produced in the $xz$-plane and show the form of the intensity and quantum potential that the electron `sees' at each instant of time as it moves along its trajectory. Second, we produced animations of density plots in the $xz$-plane. These results are presented by placing three two-dimensional $xz$-slices (three frames of the animation) along the $t$-axis, i.e., we pick out three slices of a fully three-dimensional density plot. 
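As a minimal sketch of how such grids of values can be generated, the fragment below recovers the quantum potential $Q=-\\hbar^2\\nabla^2 R\/(2mR)$ and the quantum force $-\\nabla Q$ from a sampled amplitude $R$ by finite differences. This is our own one-dimensional illustration (the grid, differencing scheme and boundary handling are arbitrary choices), not the plotting code used for the figures.

```python
import numpy as np

HBAR = 1.05457180e-34   # hbar (J s)
M = 9.10938356e-31      # electron mass (kg)

def quantum_potential(R, dx):
    """Q = -(hbar^2 / 2m) * R'' / R on a uniform 1D grid.

    The second derivative uses central differences; the two end points
    are simply copied from their neighbours.
    """
    d2R = np.empty_like(R)
    d2R[1:-1] = (R[2:] - 2.0 * R[1:-1] + R[:-2]) / dx**2
    d2R[0], d2R[-1] = d2R[1], d2R[-2]
    return -(HBAR**2 / (2.0 * M)) * d2R / R

def quantum_force(Q, dx):
    """F = -dQ/dx: zero on the plateaus, large on the steep slopes."""
    return -np.gradient(Q, dx)
```

For a uniform amplitude $Q$ vanishes everywhere, while near an interference minimum $R''\/R$ varies rapidly, producing the plateau-and-valley structure described in the plots below.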
\n\n\\subsection{Computer plots for equal pinhole widths and equal amplitudes}\n\n\\begin{figure}[h]\n\\unitlength=1in \n\\hspace*{1.1in}\\includegraphics[width=4in,height=5.33in] {figure6.jpg} \n\\caption{The intensity in a two-pinhole interference experiment with equal widths and equal amplitudes modeled by two-dimensional Gaussian wave-packets with equal widths and equal amplitudes.\\label{INT2DG}}\n\\end{figure}\nThe animation sequence for the intensity $R^2$ for equal widths and equal amplitudes (EWEA) is shown in Fig. \\ref{INT2DG}. The animation ranges are $x=-3.5\\times 10^{-6}$ to $3.5\\times 10^{-6}$ m, $z=-3.5\\times 10^{-6}$ to $3.5\\times 10^{-6}$ m, $R^2 = 0$ to $1.8\\times 10^{-11}$ Jm$^{-2}$s$^{-1}$ and $t=0$ to $1.5\\times 10^{-9}$ s. The plots show the time evolution of the intensity in the $xz$-plane. The first frame shows the Gaussian peaks at the two pinholes. Frames two and three show the spreading Gaussian packets beginning to overlap and also show the beginning of the formation of interference fringes. Frames four to six show the time evolution of distinct interference fringes. Frame six shows the intensity distribution at time $t=1.5\\times 10^{-9}$ s, which corresponds to a pinhole screen to detecting screen separation of $y = 0.195$ m given that the electron velocity in the $y$-direction is $v_y = 1.3 \\times 10^{8}$ ms$^{-1}$. Our pinhole screen to detecting screen separation therefore differs from that in J\\\"{o}nsson's experiment, which was $0.35$ m, corresponding to a time of flight of $t=2.6923\\times 10^{-9}$ s. We chose this time in order to show the beginnings of the overlap of the Gaussian wave packets. 
Using the J\\\"{o}nsson time of $t=0$ to $2.6923\\times 10^{-9}$ s resulted in a clear interference pattern in the second frame, missing out the early overlap.\n\nWe can make an approximate calculation of the visibility of the central fringe by taking readings from `face-on' plots, i.e., plots with the $xz$-plane in the plane of the paper (not shown here). Readings can be taken from the plots shown by taking due consideration of the orientation, but even then, readings are less accurate than with face-on plots. Similarly, to calculate the visibility of the interference fringes for the case of unequal amplitudes and for the case of unequal widths, readings are taken from face-on plots not included here. The visibility of the central fringe for the EWEA case is:\n\\[\nV_{EWEA}=\\frac{I_{max}-I_{min}}{I_{max}+I_{min}}=\\frac{10\\times 10^{10}-0}{10\\times 10^{10}+0}=1.\n\\]\n\\begin{figure}[h]\n\\unitlength=1in \n\\hspace*{1.0in}\\includegraphics[width=4in,height=3in] {figure7.jpg} \n\\caption{A sequence of density plots (3 slices of a 3D-plot) of the intensity $R^2$ in a two-pinhole interference experiment with equal widths and equal amplitudes modeled by two-dimensional Gaussian wave-packets with equal widths and equal amplitudes.\\label{INT2DGDP}}\n\\end{figure}\nA sequence of density plots (3 slices of a 3D-plot) of the intensity $R^2$ for equal widths and equal amplitudes is shown in Fig. \\ref{INT2DGDP}. The plot ranges are $x=-4\\times 10^{-6}$ to $4\\times 10^{-6}$ m, $z=-4\\times 10^{-6}$ to $4\\times 10^{-6}$ m and $t=0$ to $1.5\\times 10^{-9}$ s. 
The first $xz$-slice shows the high intensity emerging from the two pinholes, the middle $xz$-slice shows the beginning of the formation of interference as the Gaussian packets begin to overlap, while the final $xz$-slice shows a fully formed interference pattern.\n\\begin{figure}[h]\n\\unitlength=1in\n\\hspace*{1.1in}\\includegraphics[width=4in,height=5.33in] {figure8.jpg} \n\\caption{The quantum potential in a two-pinhole interference experiment with equal widths and equal amplitudes modeled by two-dimensional Gaussian wave-packets with equal widths and equal amplitudes.\\label{QPAnim}}\n\\end{figure}\n\nThe animation sequence for the quantum potential $Q$ for equal widths and equal amplitudes is shown in Fig. \\ref{QPAnim}. The animation ranges are $x=-3.5\\times 10^{-6}$ to $3.5\\times 10^{-6}$ m, $z=-3.5\\times 10^{-6}$ to $3.5\\times 10^{-6}$ m, $Q=-1\\times 10^{-23}$ to $1\\times 10^{-23}$ J and $t=0$ to $2.6923\\times 10^{-9}$ s. This time we used the same pinhole to detecting screen separation, 0.35 m, (corresponding to a time of flight of $t=2.6923\\times10^{-9}$ s) as J\\\"{o}nsson, since this resulted in a clear plateau-valley formation in the final frame. The first frame shows that the quantum potential is restricted to the width of the two pinholes. The second frame shows the beginning of the formation of quantum potential plateaus and valleys corresponding to the beginning of the overlap of the Gaussian wave packets. Subsequent frames show the continued widening of the plateaus and the deepening of the valleys. The final frame, as mentioned, shows clear plateau and valley formation. \nThe gradient of the quantum potential gives rise to a quantum force. Where the gradient is zero, as on the flat plateaus, the quantum force is zero and electrons progress along their trajectory to a bright fringe on the detecting screen unhindered. At the edges of the plateaus the quantum potential slopes steeply down to the valleys. 
The steep gradient of these slopes gives rise to a large quantum force that pushes particles with trajectories along these slopes to adjacent plateaus, after which they proceed unhindered to a bright fringe on the detecting screen. In this way, the quantum potential guides the electrons to the bright fringes and prevents electrons from reaching the dark fringes. Note though, as mentioned earlier, that since the $R$ and $S$-fields codetermine each other, the $S$-field can also be said to guide the electrons to the bright fringes. \n\n\\begin{figure}[h]\n\\unitlength=1in \n\\hspace*{1.0in}\\includegraphics[width=4in,height=3in] {figure9.jpg} \n\\caption{A sequence of density plots (3 slices of a 3D-plot) of the quantum potential in a two-pinhole interference experiment with equal widths and equal amplitudes modeled by two-dimensional Gaussian wave-packets with equal widths and equal amplitudes.\\label{T2DGQPDP}}\n\\end{figure}\nA sequence of density plots (3 slices of a 3D-plot) of the quantum potential for equal widths and equal amplitudes is shown in Fig. \\ref{T2DGQPDP}. The plot ranges are $x=-3\\times 10^{-6}$ to $3\\times 10^{-6}$ m, $z=-3\\times 10^{-6}$ to $3\\times 10^{-6}$ m and $t=0$ to $1.5\\times 10^{-9}$ s. When producing the density animation we found that part of the image in the $t=0$ s frame was missing, hence, we have left out this frame, beginning instead with the $t=3\\times 10^{-10}$ s frame. The reason for the missing image is not clear, but is most likely due to the density plotting algorithm not handling difficult numbers very well, unlike the 3-D plotting algorithm. The first slice shows the beginning of the overlap of the Gaussian packets and the beginning of plateau and valley formation. The middle slice shows the more developed plateaus and valleys, while the final slice shows distinct plateaus and valleys. The wide bright blue bands indicate the quantum potential plateaus where the quantum force is zero. 
The narrower dark bands show the quantum potential sloping down to the valleys, slopes where electrons experience a strong quantum force. The darker the bands, the steeper the quantum potential slopes.\n\n\n\\begin{figure}[h]\n\\unitlength=1in\n\\hspace*{0.8in}\\includegraphics[width=4.5in,height=2.36in] {figure10.jpg} \n\\caption{The trajectories in a two-pinhole interference experiment with equal widths and equal amplitudes modeled by two-dimensional Gaussian wave-packets with equal widths and equal amplitudes.\\label{TRAJEWEA}}\n\\end{figure}\nThe trajectories for equal widths and equal amplitudes are shown in Fig. \\ref{TRAJEWEA}. The trajectory ranges are $x=-3.5\\times 10^{-6}$ to $3.5\\times 10^{-6}$ m, $z=-3.5\\times 10^{-6}$ to $3.5\\times 10^{-6}$ m and $t=0$ to $1.5\\times 10^{-9}$ s. Though the axes are labeled at the edges, the plots correspond to axes with their origin placed centrally between the pinholes. We have chosen to label the axes at the edges of the plot frame in order to show the trajectories clearly. In a real experiment, the initial position of the electrons can lie anywhere within the pinholes. But, to clearly show the behaviour of the trajectories we have chosen square initial positions within each pinhole. It is clear that interference occurs only along the $x$-direction; there is no interference along the $z$-direction. We also see clearly how the quantum potential (and $S$-field)\nguides the electron trajectories to the bright fringes. Electrons whose trajectories lie within the quantum potential plateaus, therefore experiencing no quantum force, move along straight trajectories to the bright fringes. 
Electrons whose trajectories lie along the quantum potential slopes are pushed by the quantum force to an adjacent plateau, thereafter proceeding along straight trajectories to the bright fringes.\n\n\\subsection{Computer plots for equal pinhole widths and unequal amplitudes}\n\n\\begin{figure}[h]\n\\unitlength=1in \n\\hspace*{1.1in}\\includegraphics[width=4in,height=5.33in] {figure11.jpg} \n\\caption{The intensity in a two-pinhole interference experiment with equal widths but unequal amplitudes modeled by two-dimensional Gaussian wave-packets with equal widths but unequal amplitudes.\\label{INT2DGUA}}\n\\end{figure}\nThe animation sequence for the intensity $R^2$ for equal widths and unequal amplitudes (EWUA) is shown in Fig. \\ref{INT2DGUA}. The animation ranges are $x=-3.5\\times 10^{-6}$ to $3.5\\times 10^{-6}$ m, $z=-3.5\\times 10^{-6}$ to $3.5\\times 10^{-6}$ m, $R^2 = 0$ to $1.8\\times 10^{-11}$ Jm$^{-2}$s$^{-1}$ and $t=0$ to $1.5\\times 10^{-9}$ s. As indicated in Table 3.1, for the case EWUA the angle $b=\\frac{\\pi}{3}$. This results in an increase in the intensity through the $+x_0$-pinhole from $\\frac{1}{2}$ to $\\frac{3}{4}$ and a reduction in the intensity at the $-x_0$-pinhole from $\\frac{1}{2}$ to $\\frac{1}{4}$. From Fig. \\ref{INT2DGUA}, we see that as the Gaussian wave-packets begin to combine to form a single peak envelope with interference fringes beginning to form, the intensity peak is shifted toward the larger intensity $+x_0$-pinhole. This shift becomes less pronounced, almost disappearing, as the interference fringes become more distinct, as in the final $t=1.5\\times 10^{-9}$ s frame. Comparing the $t=1.5\\times10^{-9}$ s frame for EWUA with the corresponding intensity frame for EWEA, we can see visually that fringe visibility is reduced. 
We can confirm this visual observation by calculating the visibility of the central fringe:\n\\[\nV_{EWUA}=\\frac{I_{max}-I_{min}}{I_{max}+I_{min}}=\\frac{8\\times 10^{10}-2\\times 10^{10}}{8\\times 10^{10}+2\\times 10^{10}}=0.6\n\\]\nClearly, the visibility is lower for the EWUA case.\n\n\n\\begin{figure}[h]\n\\unitlength=1in \n\\hspace*{1.0in}\\includegraphics[width=4in,height=3in] {figure12.jpg} \n\\caption{A sequence of density plots (3 slices of a 3D-plot) of the intensity $R^2$ in a two-pinhole interference experiment with equal widths but unequal amplitudes modeled by two-dimensional Gaussian wave-packets with equal widths but unequal amplitudes.\\label{INT2DGDPUA}}\n\\end{figure}\nA sequence of density plots (3 slices of a 3D-plot) of the intensity $R^2$ for equal widths and unequal amplitudes is shown in Fig. \\ref{INT2DGDPUA}. The plot ranges are $x=-4\\times 10^{-6}$ to $4\\times 10^{-6}$ m, $z=-4\\times 10^{-6}$ to $4\\times 10^{-6}$ m and $t=0$ to $1.5\\times 10^{-9}$ s. Again, comparing with the EWEA case, we see that the fringe visibility is reduced: the dark bands are not as distinct as in the EWEA case.\n\n\\begin{figure}[h]\n\\unitlength=1in\n\\hspace*{1.1in}\\includegraphics[width=4in,height=5.33in] {figure13.jpg} \n\\caption{The quantum potential in a two-pinhole interference experiment with equal widths but unequal amplitudes modeled by two-dimensional Gaussian wave-packets with equal widths but unequal amplitudes.\\label{QPAnimUA}}\n\\end{figure}\nThe animation sequence for the quantum potential $Q$ for equal widths and unequal amplitudes is shown in Fig. \\ref{QPAnimUA}. The animation ranges are $x=-3.5\\times 10^{-6}$ to $3.5\\times 10^{-6}$ m, $z=-3.5\\times 10^{-6}$ to $3.5\\times 10^{-6}$ m, $Q=-2\\times 10^{-25}$ to $4\\times 10^{-25}$ J and $t=0$ to $1.5\\times 10^{-9}$ s. The quantum potential in the early frames, perhaps unexpectedly, peaks on the side of the lower intensity $-x_0$-pinhole. 
This behaviour is most pronounced in frames two and three. As the peaks and valleys become more pronounced, the envelope peak spreads and flattens as shown in frames 5 and 6. However, the valleys on the side of the lower intensity $-x_0$-pinhole are deeper. Correspondingly, the gradient of the quantum potential sloping down to the deeper valleys is greater, giving rise to a stronger quantum force. This results in the formation of more distinct fringes on the side of the lower intensity pinhole. This feature is hardly visible in either the intensity animation frames, Fig. \\ref{INT2DGUA}, or in the intensity density plots, Fig. \\ref{INT2DGDPUA}. However, as we shall see below, the trajectory plots, Fig. \\ref{TRAJEWUA}, show this feature more clearly.\n\n\n\\begin{figure}[h]\n\\unitlength=1in \n\\hspace*{1.0in}\\includegraphics[width=4in,height=3in] {figure14.jpg} \n\\caption{A sequence of density plots (3 slices of a 3D-plot) of the quantum potential in a two-pinhole interference experiment with equal widths but unequal amplitudes modeled by two-dimensional Gaussian wave-packets with equal widths but unequal amplitudes.\\label{T2DGQPDPUA}}\n\\end{figure}\nA sequence of density plots (3 slices of a 3D-plot) of the quantum potential for equal widths and unequal amplitudes is shown in Fig. \\ref{T2DGQPDPUA}. The plot ranges are $x=-3\\times 10^{-6}$ to $3\\times 10^{-6}$ m, $z=-3\\times 10^{-6}$ to $3\\times 10^{-6}$ m and $t=0$ to $1.5\\times 10^{-9}$ s. As for the EWEA case, the first $t=0$ s slice is not shown, this time, because the image is badly formed. Instead, we begin with the $t=3\\times 10^{-10}$ s slice. This slice clearly shows that the deeper valleys, indicated by blacker bands, are on the side of the lower intensity $-x_0$-pinhole. As above, the bright blue bands represent regions where the quantum potential gradient is either zero or very small, giving rise to either a zero or small quantum force. 
In the final $t=1.5\\times 10^{-9}$ s slice, the peaks and valleys even out though a slight bias to deeper valleys on the $-x_0$ side is still discernible.\n\n\\begin{figure}[h]\n\\unitlength=1in\n\\hspace*{0.8in}\\includegraphics[width=4.5in,height=2.36in] {figure15.jpg} \n\\caption{The trajectories in a two-pinhole interference experiment with equal widths but unequal amplitudes modeled by two-dimensional Gaussian wave-packets with equal widths but unequal amplitudes.\\label{TRAJEWUA}}\n\\end{figure}\nThe trajectories for equal widths and unequal amplitudes are shown in Fig. \\ref{TRAJEWUA}. The trajectory ranges are $x=-4.5\\times 10^{-6}$ to $4.5\\times 10^{-6}$ m, $z=-4.5\\times 10^{-6}$ to $4.5\\times 10^{-6}$ m and $t=0$ to $1.5\\times 10^{-9}$ s. We notice that some electron trajectories reach what were the dark regions for the case of EWEA. This indicates the reduction of fringe visibility that we saw above in the intensity plots for this case. We also notice that this reduced intensity is less pronounced on the side of the lower intensity $-x_0$-pinhole, so that interference fringes on this side are more distinct, a feature we noted above for the quantum potential for this case. The overall reduction in visibility is clear to see.\n\n\\subsection{Computer plots for unequal pinhole widths and equal amplitudes}\n\\begin{figure}[h]\n\\unitlength=1in \n\\hspace*{1.1in}\\includegraphics[width=4in,height=5.33in] {figure16.jpg} \n\\caption{The intensity in a two-pinhole interference experiment with unequal widths but equal amplitudes modeled by two-dimensional Gaussian wave-packets with unequal widths but equal amplitudes.\\label{INT2DGUW}}\n\\end{figure}\nThe animation sequence for the intensity $R^2$ for unequal widths and equal amplitudes is shown in Fig. \\ref{INT2DGUW}. 
The animation ranges are $x=-3.5\\times 10^{-6}$ to $3.5\\times 10^{-6}$ m, $z=-3.5\\times 10^{-6}$ to $3.5\\times 10^{-6}$ m, $R^2 = 0$ to $1.8\\times 10^{11}$ Jm$^{-2}$s$^{-1}$ and $t=0$ to $1.5\\times 10^{-9}$ s. From Table 3.1, we note that the width of the $+x_0$ Gaussian wave-packet is twice that of the $-x_0$ Gaussian wave-packet. The first frame clearly shows the narrower $-x_0$ wave-packet. A narrower wave-packet spreads more rapidly than a wider packet, as seen in the second frame. The more rapid spread of the narrower wave-packet results in the wave-packets beginning to overlap on the $+x$-side, as is shown in the second frame. As the wave-packets spread, the interference pattern becomes ever more distinct. Though becoming a little more symmetrical about $x=0$, the fringe pattern is shifted toward the $+x$-side, with the fringes on the $+x$-side being slightly more pronounced. This feature is seen more clearly in the intensity density plots, which we will describe next. The visibility of the central fringe for this case is:\n\\[\nV_{UWEA}=\\frac{I_{max}-I_{min}}{I_{max}+I_{min}}=\\frac{1.6\\times 10^{11}-0.1\\times 10^{11}}{1.6\\times 10^{11}+0.1\\times 10^{11}}=0.88.\n\\]\nWe see that the visibility is less than in the EWEA case, but greater than in the EWUA case.\n\n\\begin{figure}[h]\n\\unitlength=1in \n\\hspace*{1.0in}\\includegraphics[width=4in,height=3in] {figure17.jpg} \n\\caption{A sequence of density plots (3 slices of a 3D-plot) of the intensity $R^2$ in a two-pinhole interference experiment with unequal widths and equal amplitudes modeled by two-dimensional Gaussian wave-packets with unequal widths but equal amplitudes.\\label{INT2DGDPUW}}\n\\end{figure}\nA sequence of density plots (3 slices of a 3D-plot) of the intensity $R^2$ for unequal widths and equal amplitudes is shown in Fig. \\ref{INT2DGDPUW}. 
The plot ranges are $x=-4\\times 10^{-6}$ to $4\\times 10^{-6}$ m, $z=-4\\times 10^{-6}$ to $4\\times 10^{-6}$ m and $t=0$ to $1.5\\times 10^{-9}$ s. Again, the first slice clearly shows the difference in the size of the pinholes, while the second slice shows interference fringes beginning to form on the $+x$-side. The final slice shows that the interference pattern develops into a more symmetric form, though still shifted more to the $+x$-side and slightly more pronounced on this side. The dark bands are less distinct than in the EWEA case, reflecting the reduction in fringe visibility for this case. \n\n\n\\begin{figure}[h]\n\\unitlength=1in\n\\hspace*{1.1in}\\includegraphics[width=4in,height=5.33in] {figure18.jpg} \n\\caption{The quantum potential in a two-pinhole interference experiment with unequal widths but equal amplitudes modeled by two-dimensional Gaussian wave-packets with unequal widths but equal amplitudes.\\label{QPAnimUW}}\n\\end{figure}\nThe animation sequence for the quantum potential $Q$ for unequal widths and equal amplitudes is shown in Fig. \\ref{QPAnimUW}. The animation ranges are $x=-3.5\\times 10^{-6}$ to $3.5\\times 10^{-6}$ m, $z=-3.5\\times 10^{-6}$ to $3.5\\times 10^{-6}$ m, $Q=-2\\times 10^{-25}$ to $4\\times 10^{-25}$ J and $t=0$ to $1.5\\times 10^{-9}$ s. As for the interference animation, the first frame shows the difference in size of the pinholes, while the second frame shows the more rapid spread of the narrower wave-packet. The overlap of the wave-packets, as for the intensity, begins on the $+x$-side, as does the formation of plateaus and valleys. Subsequent frames show the skewed formation to the $+x$-side of plateaus and valleys. The shift to the $+x$-side is maintained even in the last frame, even though the pattern looks more symmetrical. 
This can be seen by noticing that in the last frame there are three quantum potential peaks on the $+x$-side, compared to two peaks on the $-x$-side.\n\n\n\\begin{figure}[h]\n\\unitlength=1in \n\\hspace*{1.0in}\\includegraphics[width=4in,height=3in] {figure19.jpg} \n\\caption{A sequence of density plots (3 slices of a 3D-plot) of the quantum potential in a two-pinhole interference experiment with unequal widths but equal amplitudes modeled by two-dimensional Gaussian wave-packets with unequal widths but equal amplitudes.\\label{T2DGQPDPUW}}\n\\end{figure}\nA sequence of density plots (3 slices of a 3D-plot) of the quantum potential for unequal widths and equal amplitudes is shown in Fig. \\ref{T2DGQPDPUW}. The plot ranges are $x=-3\\times 10^{-6}$ to $3\\times 10^{-6}$ m, $z=-3\\times 10^{-6}$ to $3\\times 10^{-6}$ m and $t=0$ to $1.5\\times 10^{-9}$ s. As with the quantum potential density plots for the EWUA case, the image in the first slice is problematic. This time, the image is complete but it does not reflect the two-pinhole structure. As before, this is probably because the density plotting algorithm does not handle difficult numbers very well. The middle slice shows the early formation of plateaus and valleys skewed to the $+x$-side, with the plateau regions narrower than the valley regions. In the final slice, the plateau regions become wider than the valley regions, but are less distinct as compared to the EWEA case, or even as compared to the EWUA case. Despite this, the fringe visibility is greater than for the EWUA case, though of course, less than for EWEA case, as we saw above. 
Again, the shift of quantum potential peaks to the $+x$-side can be seen.\n\n\\begin{figure}[h]\n\\unitlength=1in\n\\hspace*{0.8in}\\includegraphics[width=4.5in,height=2.36in] {figure20.jpg} \n\\caption{The trajectories in a two-pinhole interference experiment with unequal widths but equal amplitudes modeled by two-dimensional Gaussian wave-packets with unequal widths but equal amplitudes.\\label{TRAJEWEAUW}}\n\\end{figure}\nThe trajectories for unequal widths and equal amplitudes are shown in Fig. \\ref{TRAJEWEAUW}. The trajectory ranges are $x=-4\\times 10^{-6}$ to $4\\times 10^{-6}$ m, $z=-4\\times 10^{-6}$ to $4\\times 10^{-6}$ m and $t=0$ to $1.5\\times 10^{-9}$ s. The electron trajectories clearly show the rapid spread of the narrower $-x_0$-Gaussian wave-packet. It can also be seen that electron trajectories from the $-x_0$-pinhole are more evenly spread on the detecting screen parallel to the $x$-axis than for the EWEA case, indicating a reduced interference pattern. The electron trajectories from the $+x_0$-pinhole spread much less. Though only one bright and one dark fringe are shown on the $+x$-side, they appear more distinct than on the $-x$-side, reflecting the shift of the interference fringes to the $+x$-side.\n\\section{Conclusion}\nWe have seen that the behaviour of the intensity, quantum potential and electron trajectories is similar to that for the two-slit experiment modeled by one-dimensional Gaussian wave-packets. In particular, the distinctive kinked behaviour of the electron trajectories is seen in directions parallel to the $x$-axis. We saw in addition, as could possibly be guessed at the outset, that there is no interference in the vertical $z$-direction. We also saw the expected reduction in interference for the cases of unequal amplitudes and unequal widths. The reduction in interference is interpreted, as in the classical case, in terms of wave profiles with reduced coherence. 
This is a far more intuitive explanation for the reduction in fringe visibility (and in our view more appealing) than the common interpretation based on the Wootters-Zurek version of complementarity \\cite{WZ79}, where the reduction of interference for the cases of unequal amplitudes and unequal widths is attributed to partial particle behaviour and partial wave-behaviour. The partial particle behaviour is attributed to the increase in knowledge of the electron's path, in the sense that an electron is more likely to pass through the larger pinhole or the pinhole with the larger intensity. In references \\cite{K92} and \\cite{K2016} we argued that the Wootters-Zurek version of complementarity as commonly interpreted actually contradicts Bohr's principle of complementarity. In reference \\cite{K2016} we also indicated that by reference to two future, mutually exclusive experimental arrangements, an interpretation of the Wootters-Zurek version of complementarity consistent with Bohr's principle of complementarity can be achieved.\n\nUsing the weak measurement protocol introduced by Aharonov, Albert and Vaidman (see reference \\cite{K2017} for a brief overview and further references), Kocsis et al. experimentally reproduced Bohm's trajectories in a two-slit interference experiment \\cite{KOCSIS2011}. We might guess that it would not be difficult to modify the experiment slightly to reproduce the electron trajectories calculated here for the case of unequal widths and the case of unequal amplitudes. 
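For readers who wish to attempt such a comparison numerically, the trajectories follow from the guidance equation $\\dot{x}=(\\hbar\/m)\\,{\\rm Im}(\\partial_x\\psi\/\\psi)$. The fragment below is a deliberately minimal forward-Euler sketch of our own (the wave function `psi(x, t)`, the step count and the derivative spacing are all user-supplied assumptions), not the integrator used to produce the trajectory figures.

```python
import numpy as np

HBAR = 1.05457180e-34   # hbar (J s)
M = 9.10938356e-31      # electron mass (kg)

def bohm_trajectory(psi, x_init, t_final, n_steps=2000, eps=1e-12):
    """Integrate dx/dt = (HBAR/M) * Im(psi_x / psi) with forward Euler.

    psi(x, t) must return the complex wave function; its x-derivative
    is approximated by a symmetric difference with spacing eps.
    """
    dt = t_final / n_steps
    x, t = x_init, 0.0
    path = [x]
    for _ in range(n_steps):
        dpsi = (psi(x + eps, t) - psi(x - eps, t)) / (2.0 * eps)
        v = (HBAR / M) * (dpsi / psi(x, t)).imag
        x, t = x + v * dt, t + dt
        path.append(x)
    return np.array(path)
```

As a sanity check, for a plane wave $\\psi=e^{ik_x x}$ the recovered velocity is the constant $\\hbar k_x\/m$, i.e. $150$ ms$^{-1}$ for the $k_x$ of Table 3.1.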
\n\n\\include{Kaloyerou_Ilunga_2Slit_2DG_Bib_JPA}\n\n\\end{document}\n\nxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx\n\n\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction} \\label{sec:intro}\n\n\n\\begin{figure}[!t]\n \n \\hspace{0mm}\n \\begin{subfigure}[b]{0.45\\textwidth}\n \\centering\n \\includegraphics[width=0.8\\linewidth]{latex\/figures\/overview5.png}\n \n \n \\end{subfigure}\n \\hfill\n \\vspace{5mm}\n \\begin{subfigure}[b]{0.45\\textwidth}\n \\centering\n \\includegraphics[width=0.9\\linewidth]{latex\/figures\/results.png}\n \n \n \\end{subfigure}\n \\hfill\n \\caption{{\\bf Model and performance overview. (top)} TSViT architecture. A more detailed schematic is presented in Fig.\\ref{fig:TSViT_submodules}. {\\bf (bottom) TSViT performance} compared with previous arts (Table \\ref{tab:results}).}\n \\label{fig:overview}\n\\end{figure}\n\n\nThe monitoring of the Earth surface man-made impacts or activities is essential to enable the design of effective interventions to increase welfare and resilience of societies. One example is the sector of agriculture in which monitoring of crop development can help design optimum strategies aimed at improving the welfare of farmers and resilience of the food production system. \nThe second of United Nations Sustainable Development Goals (SDG) of Ending Hunger relies on increasing the crop productivity and revenues of farmers in poor and developing countries \\cite{ungoals} - approximately 2.5 billion people's livelihoods depend mainly on producing crops \\cite{Conway2012}. Achieving SDG 2 goals requires to be able to accurately monitor yields and the evolution of cultivated areas in order to measure the progress towards achieving several goals, as well as to evaluate the effectiveness of different policies or interventions. 
\nIn the European Union (EU) the Sentinel for Common Agricultural Policy program (Sen4CAP) \\cite{sen4cap} focuses on developing tools and analytics to support the verification of direct payments to farmers with underlying environmental conditionalities, such as the adoption of environmentally-friendly \\cite{practices} and crop diversification \\cite{diversification} practices, based on real-time monitoring by the European Space Agency's (ESA) Sentinel high-resolution satellite constellation \\cite{EUa} to complement on-site verification. \nRecently, the volume and diversity of space-borne Earth Observation (EO) data \\cite{50yearsEO} and post-processing tools \\cite{googleearth,deepsatdata,TrainingDML-AI} have increased exponentially. This wealth of resources, in combination with important developments in machine learning for computer vision \\cite{alexnet, resnet, fcn}, provides an important opportunity for the development of tools for the automated monitoring of crop development. \n\nTowards more accurate automatic crop type recognition, we introduce TSViT, the first fully-attentional\\footnote{without any convolution operations} architecture for general SITS processing. An overview of the proposed architecture can be seen in Fig.\\ref{fig:overview} (top). Our novel design introduces some inductive biases that make TSViT particularly suitable for the target domain:\n\\begin{itemize}\n    \\item Satellite imagery for monitoring land surface variability boasts a high revisit frequency, leading to long temporal sequences; for example, Sentinel-2 ({\\it S2}) satellites have an average revisit time of 5 days, resulting in 60-70 acquisitions per year. To reduce the amount of computation we factorize input dimensions into their temporal and spatial components, providing intuition (section \\ref{sec:tsvit_encoder}) and experimental evidence (section \\ref{sec:ablation}) about why the order of factorization matters. 
\n \\item TSViT uses a Transformer backbone \\cite{aiayn} following the recently proposed ViT framework \\cite{vit}. As a result, every TSViT layer has a global receptive field in time or space, in contrast to previously proposed convolutional and recurrent architectures \\cite{Ruwurm2,Rustowicz2019SemanticSO,garnot_iccv,duplo,tempcnn}.\n \\item To make our approach more suitable for SITS modelling we propose a tokenization scheme for the input image timeseries and propose acquisition-time-specific temporal position encodings in order to extract date-aware features and to account for irregularities in SITS acquisition times (section \\ref{sec:position_encodings}). \n \\item We make modifications to the ViT framework (section \\ref{sec:tsvit_backbone}) to enhance its capacity to gather class-specific evidence which we argue suits the problem at hand and design two custom decoder heads to accommodate both global and dense predictions (section \\ref{sec:tsvit_decoder}).\n\\end{itemize}\nOur provided intuitions are tested through extensive ablation studies on design parameters presented in section \\ref{sec:ablation}. \nOverall, our architecture achieves state-of-the-art performance in three publicly available datasets for classification and semantic segmentation presented in Table \\ref{tab:results} and Fig.\\ref{fig:overview}.\n\n\n\n\n\n\n \n\\section{Related work}\n\\label{sec:related_work}\n\n\\subsection{Crop type recognition}\nCrop type recognition is a subcategory of land use recognition which involves assigning one of $K$ crop categories (classes) at a set of desired locations on a geospatial grid. For successfully doing so modelling the temporal patterns of growth during a time period of interest has been shown to be critical \\cite{temp_var,spacetime}. 
As a result, model inputs are timeseries of $T$ satellite images of spatial dimensions $H \times W$ with $C$ channels, $\mathbf{X} \in \mathbb{R}^{T \times H\times W \times C}$, rather than single acquisitions. \nThere is a significant body of work on crop type identification in the remote sensing literature \cite{cc1, ndvi1, ndvi2, ndvi3, hmm, pel_rand}. These works typically involve multiple processing steps and domain expertise to guide the extraction of features, e.g. NDVI \cite{dvi}, that can be separated into crop types by learnt classifiers. More recently, Deep Neural Networks (DNN) trained on raw optical data \cite{Ruwurm1,Ru_wurm_2018,dnn2,dnn3,dnn4,emb_earth} have been shown to outperform these approaches.\nAt the object level (SITS classification), \cite{tempcnn, duplo, transformer_sat} use 1D data of single-pixel or parcel-level aggregated feature timeseries, rather than the full SITS record, learning a mapping $f: \mathbb{R}^{T \times C} \rightarrow \mathbb{R}^{K}$. Among these works, TempCNN \cite{tempcnn} employs a simple 1D convolutional architecture, while \cite{transformer_sat} use the Transformer architecture \cite{aiayn}. DuPLo \cite{duplo} consists of an ensemble of CNN and RNN streams in an effort to exploit the complementarity of the extracted features. Finally, \cite{garnot2019satellite} view satellite images as un-ordered sets of pixels and calculate feature statistics at the parcel level, but, in contrast to the previously mentioned approaches, their implementation requires knowledge of the object geometry.\nAt the pixel level (SITS semantic segmentation), models learn a mapping $f(\mathbf{X}) \in \mathbb{R}^{H \times W \times K}$. For this task, \cite{Ru_wurm_2018} show that convolutional RNN variants (CLSTM, CGRU) \cite{conv_lstm} can automatically extract useful features from raw optical data, including cloudy images, that can be linearly separated into classes. 
The use of CNN architectures is explored in \cite{Rustowicz2019SemanticSO}, who employ two models: a UNET2D feature extractor followed by a CLSTM temporal model (UNET2D-CLSTM), and a UNET3D fully-convolutional model. Both are found to achieve equivalent performance. In a similar spirit, \cite{fpn_clstm} use a FPN \cite{fpn} feature extractor coupled with a CLSTM temporal model (FPN-CLSTM). The UNET3Df architecture \cite{cscl} follows from UNET3D but uses a different decoder head, more suited to contrastive learning. The U-TAE architecture \cite{garnot_iccv} follows a different approach, in that it employs the encoder part of a UNET2D, applied in parallel to all images, and a subsequent temporal attention mechanism which collapses the temporal feature dimension. These spatial-only features are further processed by the decoder part of a UNET2D to obtain dense predictions. \n\n \n\subsection{Self-attention in vision}\n\nConvolutional \cite{alexnet,vgg,resnet} and fully-convolutional networks (FCN) \cite{overfeat,fcn} have been the de-facto models of choice for vision tasks over the past decade. The convolution operation extracts translation-equivariant features via the application of a small square kernel over the spatial extent of the learnt representation and grows the feature receptive field linearly with the depth of the network. In contrast, the self-attention operation, introduced as the main building block of the Transformer architecture \cite{aiayn}, uses self-similarity as a means for feature aggregation and can have a global receptive field at every layer. Following the adoption of Transformers as the dominant architecture in natural language processing tasks \cite{aiayn,bert,gpt3}, several works have attempted to exploit self-attention in vision architectures. 
Because the time complexity of self-attention scales quadratically with the size of the input, its naive implementation on image data, which typically contain more pixels than text segments contain words, would be prohibitive. To bypass this issue, early works focused on improving efficiency by injecting self-attention layers only at specific locations within a CNN \cite{nonlocal,attn_augm_cnn} or by constraining their receptive field to a local region \cite{image_transformer,stand_sa,axial_deeplab}. In practice, however, these designs do not scale well on available hardware, leading to slow throughput rates, large memory requirements and long training times.\nFollowing a different approach, the Vision Transformer (ViT) \cite{vit}, presented in further detail in section \ref{sec:vit_backbone}, constitutes an effort to apply a pure Transformer architecture to image data via a simple, yet efficient image tokenization strategy. \nSeveral works have drawn inspiration from ViT to develop novel attention-based architectures for vision. For image recognition, \cite{tokens2token,swin_transformer} re-introduce some of the inductive biases that made CNNs successful in vision, leading to improved performance without the need for long pre-training schedules; \cite{dense_vit,segmenter} employ Transformers for dense prediction, \cite{detr, vit_yolo, song2022vidt} for object detection and \cite{video_transformer,video_instance_vit,vivit} for video processing. \nAmong these works, our framework is most closely related to \cite{vivit}, who also use a spatio-temporal factorization of input dimensions, and \cite{segmenter}, who use multiple learnable tokens for semantic segmentation. 
However, we deviate significantly from \cite{vivit} by introducing acquisition-time-specific temporal encodings to accommodate an uneven distribution of images in time, by reversing the order of factorization, and by targeting both global and dense predictions (section \ref{sec:tsvit_encoder}). Additionally, we differ from \cite{segmenter} in that we introduce the {\it cls} tokens as an input to the encoder in order to collapse the time dimension and obtain class-specific features, while they use them as class-query inputs to the decoder, similar to their use in \cite{detr}. We also differ significantly from \cite{segmenter} in terms of the decoder design: they resize the output of the penultimate layer to match the input size and further process that to obtain pixel-level logits, while we decode each token directly into a region matching the input patch dimensions and reassemble these into a dense probability map (section \ref{sec:tsvit_decoder}).\n\n\section{Method}\nIn this section we present the TSViT architecture in detail. First, we give a brief overview of the ViT (section \ref{sec:vit_backbone}), which provided inspiration for this work. In section \ref{sec:tsvit_backbone} we present our modified TSViT backbone, followed by our tokenization scheme (section \ref{sec:tokenization}), encoder (section \ref{sec:tsvit_encoder}) and decoder (section \ref{sec:tsvit_decoder}) modules. Finally, in section \ref{sec:position_encodings}, we discuss several considerations behind the design of our position encoding scheme. \n\n\begin{figure}\n \centering\n \includegraphics[width=0.9\linewidth]{latex\/figures\/transformer6.png}\n \caption{{\bf Backbone architectures. (a)} Transformer backbone, {\bf (b)} ViT architecture, {\bf (c)} TSViT backbone, which employs additional {\it cls} tokens (red), each responsible for predicting a single class. 
}\n \\label{fig:vit}\n\\end{figure}\n\n\\subsection{Primer on ViT} \\label{sec:vit_backbone}\nInspired by the success of Transformers in natural language processing tasks \\cite{aiayn} the ViT \\cite{vit} is an application of the Transformer architecture to images with the fewest possible modifications. Their framework involves the tokenization of a 2D image $\\mathbf{X} \\in \\mathbb{R}^{H\\times W \\times C}$ to a set of {\\it patch} tokens $\\mathbf{Z} \\in \\mathbb{R}^{N \\times d}$ by splitting it into a sequence of $N=\\lfloor\\frac{H}{h}\\rfloor \\lfloor\\frac{W}{w}\\rfloor$ same-size and non-overlapping patches of spatial extent $(h \\times w )$ which are flattened into 1D tokens $\\mathbf{x_i} \\in \\mathbb{R}^{hwC}$ and linearly projected into $d$ dimensions. Overall, the process of token extraction is equivalent to the application of 2D convolution with kernel size $(h \\times w )$ at strides $(h, w)$ across respective dimensions. The extracted patches are used to construct model inputs as follows:\n\n\\begin{equation}\\label{eq:vit_inputs}\n \\mathbf{Z^0} = concat(\\mathbf{z_{cls}}, \\mathbf{Z} + \\mathbf{P}) \\in \\mathbb{R}^{N+1\\times d}\n\\end{equation}\n\nA set of learned positional encoding vectors $\\mathbf{P} \\in \\mathbb{R}^{N\\times d}$, added to $\\mathbf{Z}$, are employed to encode the absolute position information of each token and break the permutation invariance property of the subsequent Transformer layers. A separate learned class ({\\it cls}) token $\\mathbf{z_{cls}} \\in \\mathbb{R}^{d}$ \\cite{bert} is prepended to the linearly transformed and positionally augmented {\\it patch} tokens leading to a length $N+1$ sequence of tokens $\\mathbf{Z^0}$ which are used as model inputs. The Transformer backbone consists of $L$ blocks of alternating layers of Multiheaded Self-Attention (MSA) \\cite{aiayn} and residual Multi-Layer Perceptron (MLP) (Fig.\\ref{fig:vit}(a)). 
\n\n\\begin{equation}\\label{block1}\n \\mathbf{Y^l} = MSA(LN(\\mathbf{Z^l})) + \\mathbf{Z^l}\n\\end{equation}\n\n\\begin{equation}\\label{block2}\n \\mathbf{Z^{l+1}} = MLP(LN(\\mathbf{Y^l})) + \\mathbf{Y^l}\n\\end{equation}\n\nPrior to each layer, inputs are normalized following Layernorm (LN) \\cite{ln}. MLP blocks consist of two layers of linear projection followed by GELU non-linear activations \\cite{gelu}. In contrast to CNN architectures, in which spatial dimensions are reduced while feature dimensions increase with layer depth, Transformers are isotropic in that all feature maps $\\mathbf{Z}^l \\in \\mathbb{R}^{1+N\\times d}$ have the same shape throughout the network.\nAfter processing by the final layer L, all tokens apart from the first one (the state of the {\\it cls} token) are discarded and unormalized class probabilities are calculated by processing this token via a MLP. A schematic representation of the ViT architecture can be seen in Fig.\\ref{fig:vit}(b).\n\n\n\\begin{figure}[!t]\n \\centering\n \\includegraphics[width=0.9\\linewidth]{latex\/figures\/tokenization2.png}\n \\caption{{\\bf SITS Tokenization}. We embed each satellite image independently following ViT \\cite{vit}}\n \\label{fig:tokenization}\n\\end{figure}\n\n\\subsection{Backbone architecture}\\label{sec:tsvit_backbone}\nIn the ViT architecture, the {\\it cls} token progressively refines information gathered from all {\\it patch} tokens to reach a final global representation used to derive class probabilities. \nOur TSViT backbone, shown in Fig.\\ref{fig:vit}(c), essentially follows from ViT, with few modifications in the tokenization and decoder layers. More specifically, we introduce $K$ (equal to the number of object classes) additional learnable {\\it cls} tokens $\\mathbf{Z_{cls}} \\in \\mathbb{R}^{K \\times d}$, compared to ViT which uses a single token. 
\n\\begin{equation}\\label{eq:tsvit_inputs}\n \\mathbf{Z^0} = concat(\\mathbf{Z_{cls}}, \\mathbf{Z} + \\mathbf{P}) \\in \\mathbb{R}^{N+K\\times d}\n\\end{equation}\nWithout deviating from ViT, all {\\it cls} and positionally augmented {\\it patch} tokens are concatenated and processed by the $L$ layers of a Transformer encoder. After the final layer, we discard all {\\it patch} tokens and project each {\\it cls} token into a scalar value. By concatenating these values we obtain a length $K$ vector of unormalised class probabilities. \nThis design choice brings the following two benefits: 1) it increases the capacity of the {\\it cls} token relative to the {\\it patch} tokens, allowing them to store more patterns to be used by the MSA operation; 2) it allows for more controlled handling of the interactions between evidence gathered for each class.\nRegarding the first point, introducing multiple {\\it cls} tokens can be seen as equivalent to increasing the dimension of a single {\\it cls} token to an integer multiple of the {\\it patch} token dimension $d_{cls} = k d_{patch}$ and split the {\\it cls} token into $k$ separate subspaces prior to the MSA operation. In this way we can increase the capacity of the {\\it cls} tokens while avoiding issues such as the need for asymmetric MSA weight matrices for {\\it cls} and {\\it patch} tokens, which would effectively more than double our model's parameter count. Furthermore, by choosing $k=K$ and enforcing a bijective mapping from {\\it cls} tokens to class predictions, the state of each {\\it cls} token becomes more focused to a specific class with network depth. In TSViT we go a step further and explicitly separate {\\it cls} tokens by class after processing with the temporal encoder to allow only same-{\\it cls}-token interactions in the spatial encoder. 
In section \\ref{sec:tsvit_encoder} we argue why this is a very useful inductive bias for modelling spatial relationships in crop type recognition.\n\n\n\\subsection{Tokenization of SITS inputs}\\label{sec:tokenization}\nA SITS record $\\mathbf{X} \\in \\mathbb{R}^{T \\times H\\times W \\times C}$ consists of a series of $T$ satellite images of spatial dimensions $H \\times W$ with $C$ channels. \nFor the tokenization of our 3D inputs, we can extend the tokenization-as-convolution approach to 3D data and apply a 3D kernel with size $(t \\times h \\times w)$ at stride $(t,h,w)$ across temporal and spatial dimensions. In this manner $N=\\lfloor\\frac{T}{t}\\rfloor \\lfloor\\frac{H}{h}\\rfloor \\lfloor\\frac{W}{w}\\rfloor$ non-overlapping tokens $\\mathbf{x_i} \\in \\mathbb{R}^{thwC}$ are extracted, and subsequently projected to $d$ dimensions. \nUsing $t>1$, all extracted tokens contain spatio-temporal information. For the special case of $t=1$ each token contains spatial-only information for each acquisition time and temporal information is accounted for only through the encoder layers. Since the computation cost of global self-attention layers is quadratic w.r.t. the length of the token sequence $\\mathcal{O}(N^2)$, choosing larger values for $t,h,w$ can lead to significantly reduced number of FLOPS. In our experiments, however, we have found small values for $t,h,w$ to work much better in practice. For all presented experiments we use a value of $t=1$ motivated in part because this choice simplifies the implementation of acquisition-time-specific temporal position encodings, described in section \\ref{sec:position_encodings}. With regards to the spatial dimensions of extracted patches we have found small values to work best for semantic segmentation, which is reasonable given that small patches retain additional spatial granularity. 
In the end, our tokenization scheme is similar to ViT's, applied in parallel for each acquisition as shown in Fig.\ref{fig:tokenization}. However, at this stage, instead of unrolling feature dimensions, we retain the spatial structure of the original input, as reshape operations will be handled by the TSViT encoder submodules.\n\n\begin{figure*}\n \centering\n \includegraphics[width=0.999\linewidth]{latex\/figures\/TSViT2.png}\n \caption{{\bf TSViT submodules. (a)} Temporal encoder. We reshape tokenized inputs, retaining the spatio-temporal structure of SITS, into a set of timeseries for each spatial location, add temporal position encodings $\mathbf{P_T[t,:]}$ for acquisition times $\mathbf{t}$, concatenate local {\it cls} tokens $\mathbf{Z_{Tcls}}$ (eq.\ref{eq:temp_transf_input}) and process in parallel with a Transformer. Only the first $K$ output tokens are retained. {\bf (b)} Spatial encoder. We reshape the outputs of the temporal encoder into a set of spatial feature maps for each {\it cls} token, add spatial position encodings $\mathbf{P_S}$, concatenate global {\it cls} tokens $\mathbf{Z_{Scls}}$ (eq.\ref{eq:spatial_encoder_input}) and process in parallel with a Transformer. {\bf (c)} Segmentation head. Each local {\it cls} token is projected into $hw$ values denoting class-specific evidence for every pixel in a patch. All patches are then reassembled into the original image dimensions. {\bf (d)} Classification head. Global {\it cls} tokens are projected into scalar values, each denoting evidence for the presence of a specific class.}\n \label{fig:TSViT_submodules}\n\end{figure*}\n\subsection{Encoder architecture}\label{sec:tsvit_encoder}\nIn the previous section we presented a motivation for using small values of $t,h,w$ for the extracted patches. Unless other measures are taken to reduce the model's computational cost, this choice would be prohibitive for processing SITS with multiple acquisition times. 
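A back-of-the-envelope count of pairwise token interactions illustrates the cost (illustrative sizes; constant factors and the {\it cls} tokens are ignored):

```python
# Why factorization matters: global self-attention cost grows quadratically
# with sequence length. Sizes below are illustrative, not prescriptive.
T, H, W = 60, 24, 24        # e.g. one year of S2 acquisitions, 24x24 pixels
t, h, w = 1, 3, 3           # small patch sizes, as motivated above
nT, nH, nW = T // t, H // h, W // w

joint = (nT * nH * nW) ** 2                             # one attention over all tokens
factorized = (nH * nW) * nT ** 2 + nT * (nH * nW) ** 2  # temporal, then spatial
ratio = joint // factorized
assert ratio == 30          # roughly 30x fewer pairwise interactions
```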
To avoid such problems, we choose to factorize our inputs across their temporal and spatial dimensions, a practice commonly employed for video processing \cite{spatial_temp1,spatial_temp2,spatial_temp3,spatial_temp4,spatial_temp5,spatial_temp6}. We note that all these works use a spatial-temporal factorization order, which is reasonable when dealing with natural images, given that it allows the extraction of higher-level, semantically aware spatial features whose relationship in time is useful for scene understanding. However, we argue that in the context of SITS, reversing the order of factorization is a meaningful design choice for the following reasons: \n1) in contrast to natural images, in which context can be useful for recognising an object, in crop type recognition context can provide little information, or can even be misleading. This arises from the fact that the shape of an agricultural parcel does not need to follow its intended use, i.e. most crops can generally be cultivated independently of a field's size or shape. Of course there exist variations in the shapes and sizes of agricultural fields \cite{agri_land_patterns}, but these depend mostly on local agricultural practices and are not expected to generalize over unseen regions. Furthermore, agricultural parcels do not inherently contain sub-components or structure. Thus, knowing what is cultivated in a piece of land is not expected to provide information about what grows nearby. This is in contrast to other objects which clearly contain structure, e.g. in human face parsing there are clear expectations about the relative positions of various face parts. To test this hypothesis we enumerate over all agricultural parcels belonging to the most popular crop types in the T31TFM {\it S2} tile in France for year 2018 and take crop-type-conditional pixel counts over a 1km square region from their centers. 
Then, we calculate the cosine similarity of these values with unconditional pixel counts over the extent of the T31TFM tile and find a high degree of similarity, suggesting that there are no significant variations between these distributions; \n2) a small region in SITS is far more informative than its equivalent in natural images, as it contains more channels than regular RGB images ({\it S2} imagery contains 13 bands in total), whose intensities are averaged over a relatively large area (the highest resolution of {\it S2} images is $10 \times 10$ m$^2$); \n3) SITS for land cover recognition do not typically contain moving objects. As a result, a timeseries of single pixel values can be used for extracting features that are informative of a specific object part found at that particular location. \nTherefore, several objects can be recognised using only information found in a single location; plants, for example, can be recognised by variations of their spectral signatures during their growth cycle. Many works performing crop classification do so using only temporal information in the form of timeseries of small patches \cite{Ru_wurm_2018}, pixel statistics over the extent of parcels \cite{Ruwurm1} or even values from single pixels \cite{transformer_sat, tempcnn}. \nOn the other hand, the spatial patterns in a single image are uninformative of the crop type, as evidenced by the low performance of systems relying on single images \cite{garnot_iccv}. Our encoder architecture can be seen in Fig.\ref{fig:TSViT_submodules}(a,b). 
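The tile-level context check described in point 1) above can be sketched as follows, with made-up counts standing in for the actual tile statistics:

```python
import numpy as np

# Sketch of comparing crop-type-conditional neighbourhood pixel counts
# against the unconditional class distribution via cosine similarity.
# The counts below are invented for illustration only.
def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

unconditional = np.array([500., 300., 150., 50.])  # tile-wide class counts
conditional = np.array([480., 310., 160., 50.])    # counts near one crop type

sim = cosine(conditional, unconditional)
assert sim > 0.99  # high similarity -> context carries little extra signal
```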
We now describe the temporal and spatial encoder submodules.\n \n\n{\bf Temporal encoder} We first tokenize a SITS record $\mathbf{X} \in \mathbb{R}^{T \times H\times W \times C}$ into a set of tokens of size $(N_T \times N_H \times N_W \times d)$, as described in section \ref{sec:tokenization}, and subsequently reshape to $\mathbf{Z_T} \in \mathbb{R}^{N_H N_W \times N_T \times d}$ to get a list of token timeseries for all patch locations. The input to the temporal encoder is:\n\begin{equation}\label{eq:temp_transf_input}\n \mathbf{{Z^0_T}} = concat(\mathbf{Z_{Tcls}}, \mathbf{Z_T} + \mathbf{P_T[t,:]}) \in \mathbb{R}^{N_HN_W \times (K+N_T)\times d}\n\end{equation}\n\nwhere $\mathbf{P_T[t,:]} \in \mathbb{R}^{N_T \times d}$ and $\mathbf{Z_{Tcls}} \in \mathbb{R}^{K \times d}$ are respectively added and prepended to all $N_HN_W$ timeseries, and $\mathbf{t} \in \mathbb{R}^T$ is a vector containing all $T$ acquisition times. \nAll samples are then processed in parallel by a Transformer module. Consequently, the final feature map of the temporal encoder becomes $\mathbf{Z^L_T} \in \mathbb{R}^{N_H N_W \times (K + N_T) \times d}$, in which the first $K$ tokens in the temporal dimension correspond to the prepended {\it cls} tokens. We only keep these tokens, discarding the remaining $N_T$ vectors.\n\n{\bf Spatial encoder} We now transpose the first and second dimensions of the temporal encoder output to obtain a list of patch features $\mathbf{Z_S} \in \mathbb{R}^{K \times N_H N_W \times d}$ for all output classes. 
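The tensor bookkeeping of the temporal encoder stage and the transpose above can be sketched as follows (toy sizes; the Transformer itself is replaced by an identity stand-in, since only the reshaping is illustrated):

```python
import numpy as np

# Shape-level sketch of the temporal encoder: timeseries per patch location,
# prepend K local cls tokens, keep only their states, then regroup by class.
nT, nH, nW, K, d = 6, 4, 4, 5, 16   # illustrative sizes
rng = np.random.default_rng(0)

tokens = rng.normal(size=(nT, nH, nW, d))
Z_T = tokens.transpose(1, 2, 0, 3).reshape(nH * nW, nT, d)  # timeseries per patch
P_T = rng.normal(size=(nT, d))      # date-indexed temporal encodings
Z_Tcls = rng.normal(size=(K, d))    # local cls tokens, shared across locations

Z0_T = np.concatenate(
    [np.broadcast_to(Z_Tcls, (nH * nW, K, d)), Z_T + P_T], axis=1)
assert Z0_T.shape == (nH * nW, K + nT, d)

ZL_T = Z0_T                          # stand-in for the Transformer layers
Z_S = ZL_T[:, :K].transpose(1, 0, 2)  # keep the K cls states, swap dims
assert Z_S.shape == (K, nH * nW, d)  # per-class spatial feature maps
```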
In a similar spirit, the input to the spatial encoder becomes:\n\begin{equation}\label{eq:spatial_encoder_input}\n \mathbf{{Z^0_S}} = concat(\mathbf{Z_{Scls}}, \mathbf{Z_S} + \mathbf{P_S}) \in \mathbb{R}^{K \times (1 + N_H N_W) \times d}\n\end{equation}\n\nwhere the spatial position encodings $\mathbf{P_S} \in \mathbb{R}^{N_H N_W \times d}$ are added to all $K$ spatial representations and each element of $\mathbf{Z_{Scls}} \in \mathbb{R}^{K \times 1 \times d}$ is prepended to the corresponding class-specific feature map. We note that, while in the temporal encoder {\it cls} tokens were prepended at all patch locations, there is now a single {\it cls} token per spatial feature map, such that $\mathbf{Z_{Scls}}$ is used to gather global SITS-level information. Processing with the spatial encoder leads to a similarly sized output feature map $\mathbf{{Z^L_S}} \in \mathbb{R}^{K \times (1 + N_H N_W) \times d}$. \n\n\subsection{Decoder architecture}\label{sec:tsvit_decoder}\nThe TSViT encoder architecture described in the previous section is designed as a general backbone for SITS processing. To accommodate both global and dense prediction tasks we design two decoder heads which feed on different components of the encoder output. We view the output of the encoder as $\mathbf{{Z^L_S}} = [\mathbf{{Z^L_{Sglobal}}} | \mathbf{{Z^L_{Slocal}}}]$, corresponding respectively to the states of the global and local {\it cls} tokens.\nFor {\bf image classification}, we only make use of $\mathbf{{Z^L_{Sglobal}}} \in \mathbb{R}^{K \times d}$. We proceed, as described in sec.\ref{sec:tsvit_backbone}, by projecting each feature into a scalar value and concatenating these values to obtain global unnormalised class probabilities, as shown in Fig.\ref{fig:TSViT_submodules}(d).\nComplementarily, for {\bf semantic segmentation} we only use $\mathbf{{Z^L_{Slocal}}} \in \mathbb{R}^{K \times N_HN_W \times d}$. 
These features encode information for the presence of each class over the spatial extent of each image patch. By projecting each feature into $hw$ dimensions and further reshaping the feature dimension to $(h \times w)$ we obtain a set of class-specific probabilities for each pixel in a patch. It is now possible to merge these patches together into an output map $(H \times W \times K)$ which represents class probabilities for each pixel in the original image. This process is presented schematically in Fig.\ref{fig:TSViT_submodules}(c).\n\n\n\n\vspace{-0.1cm}\n\n\n\subsection{Position encodings}\label{sec:position_encodings}\nAs described in section \ref{sec:tsvit_encoder}, positional encodings are injected at two different locations in our proposed network. First, temporal position encodings are added to all {\it patch} tokens before processing by the temporal encoder, as shown in eq.(\ref{eq:temp_transf_input}). This operation aims at breaking the permutation invariance property of MSA by introducing time-specific position biases to all extracted {\it patch} tokens. For crop recognition, encoding the absolute temporal position of features is important, as it helps identify a plant's growth stage within the crop cycle. Furthermore, the time interval between successive images in SITS varies depending on acquisition times and other factors, such as the degree of cloudiness or corrupted data. To introduce acquisition-time-specific biases into the model, our temporal position encodings $\mathbf{P_T[t,:]}$ depend directly on the acquisition times $\mathbf{t}$. More specifically, we make note of all the dates $\mathbf{t'} = [t_1, t_2, ..., t_{T'}]$ corresponding to the acquisition times found in the training data and construct a lookup table $\mathbf{P_T} \in \mathbb{R}^{T' \times d}$ containing all learnt temporal position encodings indexed by date. 
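A minimal sketch of such a date-indexed lookup, with dates represented as day-of-year integers and random values standing in for learnt weights:

```python
import numpy as np

# Sketch of the date-indexed position-encoding lookup table P_T: one
# learnable d-vector per calendar date seen during training. Dates and
# sizes are illustrative stand-ins.
d = 16
train_dates = [3, 8, 13, 18, 33, 48]          # all acquisition dates (T' of them)
rng = np.random.default_rng(0)
P_T = rng.normal(size=(len(train_dates), d))  # lookup table, R^{T' x d}
index = {date: i for i, date in enumerate(train_dates)}

t = [3, 13, 48]                               # this sample's acquisition dates
encodings = P_T[[index[date] for date in t]]  # P_T[t, :]
assert encodings.shape == (len(t), d)
```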
Finding the date-specific encodings that need to be added to the {\it patch} tokens (eq.\ref{eq:temp_transf_input}) then reduces to looking up the appropriate indices of $\mathbf{P_T}$. In this way, temporal position encodings introduce a dynamic prior of where to look in the model's global temporal receptive field, rather than simply encoding the order of SITS acquisitions, which would discard valuable information. \nFollowing token processing by the temporal encoder, spatial position embeddings $\mathbf{P_S}$ are added to the extracted {\it cls} tokens. These are not dynamic in nature and are similar to the position encodings used in the original ViT architecture, with the difference that these biases are now added to $K$ feature maps instead of a single one. \n\n\vspace{-0.2cm}\n\section{Experiments}\label{sec:experiments}\nWe apply TSViT to two tasks using SITS records $\mathbf{X} \in \mathbb{R}^{T \times H\times W \times C}$ as inputs: classification and semantic segmentation. \nAt the object level, classification models learn a mapping $f(\mathbf{X}) \in \mathbb{R}^{K}$ for the object occupying the center of the $H \times W$ region. Semantic segmentation models learn a mapping $f(\mathbf{X}) \in \mathbb{R}^{H \times W \times K}$, predicting class probabilities for each pixel over the spatial extent of the SITS record. We use an ablation study on semantic segmentation to guide model design and hyperparameter tuning, and proceed to present our main results on three publicly available SITS semantic segmentation and classification datasets. \n\n\vspace{-0.1cm}\n\n\subsection{Training and evaluation}\n{\bf Datasets} To evaluate the performance of our proposed semantic segmentation model we use three publicly available {\it S2} land cover recognition datasets. \nThe dataset presented in \cite{Ru_wurm_2018} covers a densely cultivated area of interest of $102 \times 42$ km$^2$ north of Munich, Germany and contains 17 distinct classes. 
Individual image samples cover a $240 \times 240$ m$^2$ area ($24 \times 24$ pixels) and contain 13 bands. \nThe PASTIS dataset \cite{garnot_iccv} contains images from four different regions in France with diverse climates and crop distributions, spanning over 4000 km$^2$ and including 18 crop types. In total, it includes 2.4k SITS samples of size $128 \times 128$, each containing 33-61 acquisitions and 10 image bands. Because the PASTIS sample size is too large for training TSViT efficiently on available hardware, we split each sample into $24 \times 24$ patches and retain all acquisition times, for a total of 57k samples. To accommodate a large set of experiments we only use fold 1 of the five folds provided in PASTIS.\nFinally, we use the T31TFM-1618 dataset \cite{cscl}, which covers a densely cultivated {\it S2} tile in France for the years 2016-18 and includes 20 distinct classes. In total, it includes 140k samples of size $48 \times 48$, each containing 14-33 acquisitions and 13 image bands.\nFor the SITS classification experiments, we construct the datasets from the respective segmentation datasets. More specifically, for PASTIS we use the provided object instance ids to extract $24 \times 24$ pixel regions whose center pixel falls inside each object and use the class of this object as the sample class. The remaining two datasets contain samples of smaller spatial extent, making the above strategy infeasible in practice. Here, we choose to retain the samples as they are and assign the class of the center pixel as the global class. We note that this strategy forces us to discard samples in which the center pixel belongs to the background class. Additional details are provided in the supplementary material.\n\n{\bf Implementation details} \nFor all experiments presented, we train for the same number of epochs, using the provided data splits from the respective publications for a fair comparison. 
More specifically, we train on all datasets using the provided training sets and report results on the validation sets for Germany and T31TFM-1618, and on the test set for PASTIS. For training TSViT we use the AdamW optimizer \cite{adamw} with a learning rate schedule which includes a warmup period rising from zero to a maximum value of $10^{-3}$ at epoch 10, followed by cosine learning rate decay \cite{loshchilov2017sgdr} down to $5\times10^{-6}$ at the end of training. For Germany and T31TFM-1618 we train with the above settings and report the best performances between what we achieve and the original studies. Since we split PASTIS, we train with both settings and report the best results. Overall, we find that our settings improve model performance. We train with a batch size of 16 or 32 and no regularization on two Nvidia Titan Xp GPUs in a data-parallel fashion. All models are trained with a masked cross-entropy loss, masking the effect of the background class in both the training loss and the evaluation metrics. We report overall accuracy (OA), averaged over pixels, and mean intersection over union (mIoU), averaged over classes.\nFor SITS classification, in addition to the 1D models presented in section \ref{sec:related_work}, we modify the best performing semantic segmentation models by aggregating the extracted features across space prior to the application of a classifier, thus outputting a single prediction. Classification models are trained with Focal loss \cite{focal_loss}. 
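The warmup-plus-cosine learning rate schedule described above can be sketched as follows (the total epoch count is illustrative; only the warmup peak and final value are taken from the text):

```python
import math

# Sketch of a linear-warmup + cosine-decay schedule: ramp from 0 to 1e-3
# over the first 10 epochs, then decay to 5e-6 at the last epoch.
def lr_at(epoch, total_epochs, warmup=10, lr_max=1e-3, lr_min=5e-6):
    if epoch < warmup:
        return lr_max * epoch / warmup
    progress = (epoch - warmup) / (total_epochs - warmup)
    return lr_min + 0.5 * (lr_max - lr_min) * (1 + math.cos(math.pi * progress))

assert lr_at(0, 100) == 0.0
assert abs(lr_at(10, 100) - 1e-3) < 1e-9   # warmup peak
assert abs(lr_at(100, 100) - 5e-6) < 1e-9  # final value
```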
We report OA and mean accuracy (mAcc) averaged over classes.\n\n\\begin{table}[!t]\n\\begin{center}\n\\begin{tabular}{c|cc|c}\n\\hline\nAblation & \\multicolumn{2}{c|}{Settings} & mIoU \\\\\n\\hline \\hline\n\\multirow{2}{*}{Factorization order}& \\multicolumn{2}{c|}{Spatial \\& Temporal} & 48.8\\\\\n &\\multicolumn{2}{c|}{{\\bf Temporal \\& Spatial}} & {\\bf 78.5}\\\\% & & & \n\\hline\n\\multirow{2}{*}{\\#{\\it cls} tokens} & \\multicolumn{2}{c|}{1} & 78.5\\\\\n & \\multicolumn{2}{c|}{{\\bf K}} & {\\bf 83.6}\\\\\n\\hline\n\\multirow{2}{*}{Position encodings} & \\multicolumn{2}{c|}{Static} & 80.8\\\\\n& \\multicolumn{2}{c|}{{\\bf Date lookup}} & {\\bf 83.6}\\\\\n\\hline\n\\multirow{4}{*}{ \\makecell{Interactions between \\\\ {\\it cls} tokens }} & Temporal & Spatial & \\\\\n\\cline{2-3}\n \n \n & \\pmb{\\checkmark} & \\checkmark & 81.5\\\\\n & \\checkmark & \\pmb{X} & {\\bf 83.6}\\\\\n\\hline\n\\multirow{3}{*}{Patch size}& \\multicolumn{2}{c|}{$\\mathbf{2 \\times 2}$} & {\\bf 84.8}\\\\\n & \\multicolumn{2}{c|}{$3 \\times 3$} & 83.6\\\\\n & \\multicolumn{2}{c|}{$4 \\times 4$} & 81.5\\\\\n & \\multicolumn{2}{c|}{$6 \\times 6$} & 79.6\\\\% & & & \\\\\n\\hline\n\\end{tabular}\n\\end{center}\n\\caption{{\\bf Ablation on design choices for TSViT}. 
All proposed design choices are found to have a positive effect on performance.}\n\\label{tab:ablation}\n\\end{table}\n\n\n\\begin{table*}[!h]\n\\tiny\n\\centering\n\\resizebox{0.9\\textwidth}{!}{\n\\begin{tabular}{l|ccc}\n\\hline\n\\textbf{Dataset} & \\textbf{Germany} \\cite{Ru_wurm_2018} & \\textbf{PASTIS} \\cite{garnot_iccv} & \\textbf{T31TFM-1618} \\cite{cscl} \\\\\n \\hline\n \\textbf{Model} & & \\textbf{Semantic segmentation (OA \/ mIoU)} & \\\\\n \\hline \n BiCGRU \\cite{Ru_wurm_2018} & 91.3 \/ 72.3 & 80.5 \/ 56.2 & 88.6 \/ 57.7 \\\\\n FPN-CLSTM \\cite{fpn_clstm} & 91.8 \/ 73.7 & 81.9 \/ 59.5 & 88.4 \/ 57.8 \\\\ \n UNET3D \\cite{Rustowicz2019SemanticSO} & 92.4 \/ 75.2 & 82.3 \/ 60.4 & 88.4 \/ 57.6 \\\\\n UNET3Df \\cite{cscl} & 92.4 \/ 75.4 & 82.1 \/ 60.2 & 88.6 \/ 57.7 \\\\\n UNET2D-CLSTM \\cite{Rustowicz2019SemanticSO} & 92.9\/ 76.2 & 82.7 \/ 60.7 & 89.0 \/ 58.8 \\\\\n U-TAE \\cite{garnot_iccv} & 93.1 \/ 77.1 & 82.9 \/ 62.4 (83.2 \/ 63.1) & 88.9 \/ 58.5 \\\\ \n{\\bf TSViT (ours)} & {\\bf 95.0 \/ 84.8} & {\\bf 83.4 \/ 65.1 (83.4\/ 65.4)} & {\\bf 90.3 \/ 63.1} \\\\ \n\\hline\n \\textbf{Model} & & \\textbf{Object classification (OA \/ mAcc)} & \\\\\n \\hline\n TempCNN$^*$ \\cite{tempcnn} & 89.8 \/ 78.4 & 84.8 \/ 69.1 & 84.7 \/ 62.6 \\\\\n DuPLo$^*$ \\cite{duplo} & 93.1 \/ 82.2 & 84.8 \/ 69.4 & 83.9 \/ 69.5 \\\\\n Transformer$^*$ \\cite{transformer_sat} & 92.4\/ 84.3 & 84.4 \/ 68.1 & 84.3 \/ 71.4 \\\\\n UNET3D \\cite{Rustowicz2019SemanticSO} & 92.7 \/ 83.9 & 84.8 \/ 70.2 & 84.8 \/ 71.4\\\\\n UNET2D-CLSTM \\cite{Rustowicz2019SemanticSO} & 93.0 \/ 84.0 & 84.7 \/ 70.3 & 84.7 \/ 71.6 \\\\\n U-TAE \\cite{garnot_iccv} & 92.6 \/ 83.7 & 84.9 \/ 71.8 & 84.8 \/ 71.7 \\\\ \n {\\bf TSViT (ours)} & {\\bf 94.7 \/ 88.1} & {\\bf 87.1 \/ 75.5} & {\\bf 87.8 \/ 74.2} \\\\\n\\hline\n\\end{tabular} \n}\n\\caption{{\\bf Comparison with state-of-the-art models from literature}. {\\bf (top)} Semantic segmentation. {\\bf (bottom)} Object classification. $^*$1D temporal only models. 
We report overall accuracy (OA), mean intersection over union (mIoU) and mean accuracy (mAcc). For PASTIS we report results for fold-1 only; average test set performance across all five folds is shown in parentheses for direct comparison with \\cite{garnot_iccv}.}\n\\label{tab:results}\n\\end{table*}\n\n\\subsection{Ablation studies}\\label{sec:ablation}\nWe perform an ablation study on design parameters of our framework using the Germany dataset \\cite{Ru_wurm_2018}. Starting with a baseline TSViT with $L=4$ for both encoder networks, a single {\\it cls} token and $h=w=3, t=1, d=128$, we successively update our design after each ablation. Here, we present the effect of the most important design choices; additional ablations are presented in the supplementary material.\nOverall, we find that the {\\bf order of factorization} is the most important design choice in our proposed framework. Using a spatio-temporal factorization from the video recognition literature performs poorly at $48.8\\%$ mIoU. Changing the factorization order to temporo-spatial raises performance by an absolute $+29.7\\%$ to $78.5\\%$ mIoU. \nIncluding {\\bf additional \\textit{cls} tokens} increases performance to $83.6\\%$ mIoU ($+5.1\\%$), so we proceed with using $K$ {\\it cls} tokens in our design. \nWe test the effect of our date-specific {\\bf position encodings} compared to a fixed set of values and find a significant $-2.8\\%$ performance drop from using a fixed-size $\\mathbf{P_T}$ compared to our proposed lookup encodings.\nAs discussed in section \\ref{sec:tsvit_encoder}, our spatial encoder blocks cross \\textbf{\\textit{cls}-token interactions}. 
Allowing interactions among all tokens raises the compute cost from $\\mathcal{O}(K)$ to $\\mathcal{O}(K^2)$ and is found to decrease performance by $2.1\\%$ mIoU.\nFinally, we find that smaller {\\bf patch sizes} generally work better, which is reasonable given that tokens retain a higher degree of spatial granularity and are used to predict smaller regions. Using $2\\times2$ patches raises performance by $+1.2\\%$ mIoU to $84.8\\%$ compared to $3 \\times 3$ patches. \nOur final design, which is used in the main experiments presented in Table \\ref{tab:results}, employs a temporo-spatial design with $K$ {\\it cls} tokens, acquisition-time-specific position encodings, $2\\times 2$ input patches and four layers for both encoders. \n\n\n\n\n\\vspace{-0.1cm}\n\n\\begin{figure}[!t]\n\\begin{center}\n \\includegraphics[width=0.925\\linewidth]{latex\/figures\/sota_qualitative.png}\n\\end{center}\n \\caption{{\\bf Visualization of predictions} in Germany. The background class is shown in white; \"x\" indicates a false prediction.}\n\\label{fig:sota_qualitative}\n\\end{figure}\n\n\\vspace{-0.1cm}\n\n\\subsection{Comparison with SOTA}\nIn Table \\ref{tab:results} and Fig.\\ref{fig:overview}, we present performance results of our final TSViT design compared to the state-of-the-art models presented in section \\ref{sec:related_work}. For semantic segmentation, we find that all models from the literature perform similarly, with the BiCGRU being overall the worst performer, matching CNN-based architectures only in T31TFM-1618. For all datasets, TSViT outperforms previously suggested approaches by a very large margin. A visualization of predictions in Germany for the top-3 performers is shown in Fig.\\ref{fig:sota_qualitative}.\nIn object classification, we observe that 1D temporal models are generally outperformed by spatio-temporal models, with the exception of the Transformer \\cite{transformer_sat}. All 1D models perform poorly in PASTIS. 
Again, TSViT trained for classification consistently outperforms all other approaches by a large margin across all datasets. In both tasks, we find smaller improvements for the pixel-averaged compared to the class-averaged metrics, which is reasonable given the large class imbalance that characterizes the datasets. \n\n\n\\vspace{-0.2cm}\n\n\\section{Conclusion}\\label{sec:conclusion}\nIn this paper we proposed TSViT, the first fully-attentional architecture for general SITS processing. By taking advantage of the Transformer's global receptive field and capacity to learn a rich feature space, and by incorporating inductive biases that suit SITS data, we surpass the state-of-the-art performance by a large margin in object classification and semantic segmentation on three publicly available land cover recognition datasets. \n\n{\\small\n\\bibliographystyle{ieee_fullname}\n\n\\section{Computer-assisted reducibility}\n\\label{sec:computer}\n\nIn this section we describe the algorithm used for the computer-assisted proof of Lemma~\\ref{lem:config}.\n\nThe input to the algorithm describes a configuration: a graph $H$ together with a degree function $d:V(H)\\rightarrow\\mathbb{N}$ that specifies the degrees of the vertices of $H$ in the graph $G$, e.g. 
for \\rxconf{c:23triangles-3} $d(x) = 2$, $d(w) = d(z) = 3$ and $d(u) = d(y) = 9$.\nAdditionally, the input specifies an edge $e_H \\in E(H)$, whose meaning will be explained in what follows.\n\n\nLet us first describe the structure of the proofs generated by our program \\texttt{reduce.py}, and then elaborate on how we implemented finding such proofs.\n\n\\subsection{Proof structure}\n\\label{sec:structure}\n\nThe proof structure is simple. We remove the edge $e_H$ from $G$, obtaining a new graph $G'$, and colour $G'$ with a colouring $c'$, which exists by the minimality of $G$. Next, we uncolour all the edges of $E(H)$ and the goal is to find a proper colouring $c$ of all edges of $G$ (together with $e_H$) such that $c|_{E(G)\\setminus E(H)}=c'|_{E(G)\\setminus E(H)}$. Note that this proof structure fits the proof of Lemma~\\ref{lem:c1} but does not fit the proof of Lemma~\\ref{lem:c3}, and in fact configuration~\\rxconf{c:2degree2neighbours} cannot be reduced this way.\n\nOf course, even a computer cannot enumerate all colourings of $G'$, since we do not know $G$, and although $H$ is fixed, the number of possible graphs $G$ is infinite. Instead, we enumerate certain {\\em classes} of colourings of $G'$.\nSuch a class is characterized by:\n\\begin{itemize}\n\t\\item $\\mathcal{C}=\\{C_v \\mid v\\in V(H)\\}$, where $C_v$ is a multiset of colours of the $d_G(v)-d_H(v)$ edges incident with $v$ outside $H$, for every vertex $v\\in V(H)$,\n\t\\item a set $\\mathcal{P}$ that contains a triple $(i, u, v)$ for $i\\in\\{1,2,3,4\\}$ and $u,v\\in V(H)$ iff (1) both $u$ and $v$ have exactly one incident edge in $E(G)\\setminus E(H)$ colored with $i$ and (2) there is an $i$-colored path in $E(G)\\setminus E(H)$ between $u$ and $v$ in the coloring $c'$. \n\\end{itemize}\nNote that the number of possible sets $\\mathcal{P}$ is bounded for a fixed configuration $(H,d)$. Hence we are able to enumerate all the classes of colourings of $G'$. 
Moreover, we claim it is sufficient to know just the class that $c'$ belongs to in order to check whether a colouring of $E(H)$ combined with $c'|_{E(G)\\setminus E(H)}$ is a proper colouring of $G$.\nIndeed, $\\mathcal{C}$ is sufficient to make sure that for every vertex of $H$ the number of occurrences of every colour $i$ in the combined colouring is correct (i.e., at most two for $i=1,2,3,4$ and at most one for $i=0$).\nSuch a colouring could still contain a monochromatic cycle. \nHowever, if there is such a cycle $C$, it means that there is a colour $i=1,2,3,4$ and a collection $\\mathcal{A}$ of $i$-coloured paths in $E(H)$ and another collection $\\mathcal{B}$ of $i$-coloured paths in $E(G)\\setminus E(H)$ such that $C$ is formed by paths coming alternately from $\\mathcal{A}$ and $\\mathcal{B}$. Clearly, $\\mathcal{P}$ is sufficient to guarantee that this does not happen.\n\nNote that the colouring of the inner edges of the configuration, i.e., $c'|_{E(H)\\setminus \\{e_H\\}}$ is not needed here.\nHowever, we do check if there is at least one colouring $c_{\\text{inner}}$ of ${E(H)\\setminus \\{e_H\\}}$ that is consistent with $\\mathcal{C}$ and $\\mathcal{P}$, because otherwise the pair $(\\mathcal{C},\\mathcal{P})$ does not correspond to any colour class and we can safely skip it. Here, {\\em consistent} means that $c_{\\text{inner}}$ can be extended by outer colorings in the class $(\\mathcal{C},\\mathcal{P})$, i.e., (1) for every $v\\in V(H)$, the multiset $C_v \\cup \\{c_{\\text{inner}}(vw) \\mid vw \\in E(H)\\setminus \\{e_H\\}\\}$ has at most two copies of every color and at most one copy of color $0$, (2) for every $(i,u,v)\\in \\mathcal{P}$ we have $i\\in C_u\\cap C_v$, and (3) after extending the graph $(V(H), {E(H)\\setminus \\{e_H\\}})$ colored with $c_{\\text{inner}}$ by colored edges corresponding to paths in $\\mathcal{P}$ we do not get monochromatic cycles. 
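As a concrete illustration of these colouring classes, the sketch below enumerates the candidate multisets $C_v$ for a vertex with a given number of outside edges and checks the per-vertex colour limits of condition (1). It is illustrative only: this is not the actual code of reduce.py, and the function names and data layout are ours.

```python
from collections import Counter
from itertools import combinations_with_replacement

COLOURS = (0, 1, 2, 3, 4)  # 0 = matching colour, 1..4 = linear-forest colours

def outside_multisets(free_degree):
    """All multisets C_v of colours on the free_degree edges incident with v
    outside H: at most one edge of the matching colour 0 and at most two
    edges of each path colour 1..4 may meet at a vertex."""
    for combo in combinations_with_replacement(COLOURS, free_degree):
        counts = Counter(combo)
        if counts[0] <= 1 and all(counts[i] <= 2 for i in range(1, 5)):
            yield counts

def degree_ok(C_v, inner_colours):
    """Condition (1): combining C_v with the colours of the inner edges at v
    must not exceed the per-colour limits."""
    counts = C_v + Counter(inner_colours)
    return counts[0] <= 1 and all(counts[i] <= 2 for i in range(1, 5))
```

The cycle condition (3) additionally requires tracking the monochromatic paths recorded in $\mathcal{P}$, which the full program does.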
\n\nLet us call a reducibility proof with the structure described above a {\\em standard proof}.\n\nRecall that in our configurations each vertex has an upper bound on its degree in $G$ (for example 3 for vertex $w$ in $\\rxconf{c:23triangles-3}$). For some vertices this upper bound seems not to be specified (for example for vertex $u$ in $\\rxconf{c:23triangles-3}$) but then it is just equal to the maximum degree, i.e., 9. These upper bounds are exactly the values of the function $d$ that is a part of the input to our program. \nThus one may think that the program verifies reducibility only for one particular choice of the values of degrees, i.e., the maximal values.\nHowever, this implies reducibility also for any degree function $d':V(H)\\rightarrow\\mathbb{N}$ such that for each $v\\in V(H)$, $d_H(v) \\le d'(v) \\le d(v)$, as shown in the lemma below.\nThe intuition behind the proof of this lemma should be clear: a standard reducibility proof for configuration $C$ works also for configuration $C'$.\n\n\\begin{lemma}\n\t\tLet $C = (H, d)$ and $C'= (H,d')$ be two configurations and assume that for every vertex $v \\in V(H)$ we have $d_H(v) \\le d'(v) \\le d(v)$. \n\t\tIf $C$ is reducible by a standard proof, then $C'$ is reducible.\n\\end{lemma}\n\n\\begin{proof}\n\tAssume $C'$ appears in a minimal counterexample $G\\in \\mathcal{G}$. \n\tThen we proceed as in the standard reducibility proof for $C$, i.e., we remove from $G$ the same edge $e=e_H$ of $H$ which was removed for $C$. \n\tSince $G\\in \\mathcal{G}$, there is a colouring $c':E(G)\\setminus\\{e\\}\\rightarrow\\{0,1,2,3,4\\}$ of $G-e$.\n\tThis colouring defines sets $\\mathcal{C}'$ and $\\mathcal{P}'$, as described above, where $\\mathcal{C}'=\\{C'_w \\mid w\\in V(H)\\}$. 
For every $v\\in V(H)$ define $C_v$ as an arbitrary multiset of $d(v)-d_H(v)$ colors such that\n\t\\begin{equation}\n\tC'_v \\subseteq C_v \\subseteq \\{0,1,1,2,2,3,3,4,4\\}\\setminus \\{c'(vw)\\mid vw\\in E(H)\\setminus\\{e_H\\}\\}.\n\t\\label{eq:C_v}\n\t\\end{equation}\n\tAbove, we use the standard notions of inclusion and difference for multisets.\n\tNote that there is at least one candidate for $C_v$, because $|C'_v|=d'(v)-d_H(v)$ and $d'(v)\\le d(v)$. Define $\\mathcal{C} = \\{C_v\\mid v \\in V(H)\\}$.\n\t\n\tRecall that in the standard reducibility proof for $C$ we check all classes of colourings that are consistent with at least one colouring of $E(H)\\setminus \\{e_H\\}$. Since $(\\mathcal{C}',\\mathcal{P}')$ is consistent with $c'|_{E(H)\\setminus \\{e_H\\}}$, by~\\eqref{eq:C_v} also $(\\mathcal{C},\\mathcal{P}')$ is consistent with $c'|_{E(H)\\setminus \\{e_H\\}}$. Hence, the class $(\\mathcal{C},\\mathcal{P}')$ is checked in the standard reducibility proof for $C$, and this proof provides a coloring $c_{\\text{inner}}$ of $E(H)$ that can be extended by any coloring in $(\\mathcal{C},\\mathcal{P}')$. But this means that $c_{\\text{inner}}$ can also be extended by $c'|_{E(G)\\setminus E(H)}$, because no color appears at a vertex more than the required number of times and there are no monochromatic cycles. 
Hence $G$ can be colored, so it is not a counterexample, a contradiction.\n\\end{proof}\t\n\n\n\n\n\n\\subsection{Implementation}\n\nScript {\\tt reduce.py} generates all classes of colourings of $G'$ as follows.\nFirst, the recursive function {\\tt generate\\_outside\\_colorings} generates all possible collections $\\mathcal{C}$ (as described in Section~\\ref{sec:structure}).\nAt the bottom of the recursion, i.e., when the construction of $\\mathcal{C}$ is finished, it calls another recursive function {\\tt generate\\_outer\\_paths}.\nThis function, in turn, generates all possible sets $\\mathcal{P}$.\nAt the bottom of the recursion both $\\mathcal{C}$ and $\\mathcal{P}$ are constructed.\nThen, a function {\\tt color\\_inner} is called, which checks if there is at least one colouring of $E(H)\\setminus \\{e_H\\}$ that is consistent with $\\mathcal{C}$ and $\\mathcal{P}$.\nIf this is not the case, the pair $(\\mathcal{C},\\mathcal{P})$ is skipped.\nOtherwise, the function {\\tt color\\_inner} is called again, this time with the parameter {\\tt remove\\_edge} set to false, which forces the function to colour the edge $e_H$ as well. Function {\\tt color\\_inner} is a straightforward recursive function which generates all colourings of $E(H)\\setminus \\{e_H\\}$ or $E(H)$ so that the number of edges of a given colour at every vertex is as required.\nMoreover, after colouring each edge $e=ab$, it checks whether $e$ closes a cycle with a path in $\\mathcal{P}$ and, if not, it updates $\\mathcal{P}$, i.e., either joins two paths in $\\mathcal{P}$ (with endpoints in $a$ and $b$, respectively), or extends a path from $\\mathcal{P}$ (with an endpoint in $a$ or $b$) by $e$, or just adds a single-edge path to $\\mathcal{P}$.\n\nWhile writing the code, it was our priority to make it as self-explanatory as possible. We have chosen the Python programming language for the high readability of the code. 
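The backtracking idea behind color_inner can be illustrated by the following much-simplified sketch. It is not the actual reduce.py code: it colours a bare list of edges, ignoring the distinguished edge $e_H$ and the outer paths stored in $\mathcal{P}$, and it copies its state instead of updating it in place.

```python
def colour_edges(edges, capacity, paths, assignment=()):
    """Enumerate all proper colourings of `edges` with colours 0..4:
    colour 0 is a matching (at most 1 edge per vertex), colours 1..4 are
    linear forests (at most 2 edges per vertex, no monochromatic cycle).
    capacity[v][i] = number of further i-coloured edges allowed at v;
    paths[i] maps each endpoint of an i-coloured path to its other end."""
    if not edges:
        yield dict(assignment)
        return
    (u, v), rest = edges[0], edges[1:]
    for i in range(5):
        if capacity[u][i] == 0 or capacity[v][i] == 0:
            continue
        if i > 0 and paths[i].get(u) == v:  # uv would close an i-coloured cycle
            continue
        # copy the state so that backtracking needs no explicit undo
        cap2 = {w: list(cs) for w, cs in capacity.items()}
        cap2[u][i] -= 1
        cap2[v][i] -= 1
        paths2 = {c: dict(p) for c, p in paths.items()}
        if i > 0:  # splice the i-coloured paths ending at u and v into one
            end_u = paths2[i].pop(u, u)
            end_v = paths2[i].pop(v, v)
            paths2[i][end_u] = end_v
            paths2[i][end_v] = end_u
        yield from colour_edges(rest, cap2, paths2, assignment + (((u, v), i),))
```

On a triangle, exactly the monochromatic colourings in a path colour and those with two matching edges at a shared vertex are rejected.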
We deliberately have not exploited symmetries between colours, to keep the algorithm and the argument simple. The price is the running time: the program used 149 hours (on a single processor, Intel Xeon CPU E5-2698 v4, 2.20GHz) for the most demanding configuration \\rxconf{c:2weak3neighbours3neighbour}. The time used for other configurations varied from a couple of seconds to 94 minutes.\n\n\n\n\n\\section{Introduction}\n\nA {\\em linear forest} is a\nforest in which every connected component is a path. The linear arboricity\n${\\rm la}(G)$ of a graph $G$ is the minimum number of linear forests whose union is the whole of $G$.\nThe parameter was introduced by Harary~\\cite{harary} in 1970 and determining its value for various graph classes remains an active area of research today.\n\nNote that since matchings are linear forests, linear arboricity lies between the chromatic index (partitioning into matchings) and the arboricity (partitioning into forests).\nThe connection with the chromatic index can be deeper than it seems, namely it is conjectured that linear arboricity enjoys an analogue of Vizing's theorem.\nThis analogue is called the Linear Arboricity Conjecture~\\cite{AEH, alon_ijm} and states that for every graph $G$, we have $\\ceil{\\tfrac{\\Delta}{2}}\\le{\\rm la}(G)\\le\\ceil{\\tfrac{\\Delta+1}{2}}$ (the lower bound being trivial).\nThe conjecture has been proved for $\\Delta\\in\\{3,4,5,6,8,10\\}$~\\cite{AEH,akiyama2,EP,Guldan}, for complete bipartite graphs~\\cite{AEH}, for planar graphs~\\cite{Wu,WW} and recently for 3-degenerate graphs~\\cite{BasavarajuBFP20}.\n\n\nIn this paper we focus on planar graphs.\nCygan et al.~\\cite{CyganHKLW12} proved the following theorem.\n\n\\begin{theorem}[\\cite{CyganHKLW12}]\\label{th:cygan}\n\tFor any planar graph $G$ with maximum degree $\\Delta \\geq 9$, we have ${\\rm la}(G)=\\lceil\\tfrac{\\Delta}{2}\\rceil$.\n\\end{theorem}\n\n\nThey also posed the following conjecture.\n\n\\begin{conjecture}[Planar Linear Arboricity 
Conjecture~\\cite{CyganHKLW12}]\n\t\\label{plac}\n\tFor any planar graph $G$ of maximum degree $\\Delta\\ge 5$, we have ${\\rm la}(G)=\\ceil{\\tfrac{\\Delta}{2}}$.\n\\end{conjecture}\n\nNote that Conjecture~\\ref{plac} implies the Vizing Planar Graph Conjecture~\\cite{Vizing65}, stating that any planar graph $G$ of maximum degree $\\Delta\\ge 6$ has chromatic index $\\Delta$ (currently open only for $\\Delta=6$).\n\nThe equality ${\\rm la}(G)=\\ceil{\\tfrac{\\Delta}{2}}$ holds for all odd values of $\\Delta$, which follows from the weaker upper bound ${\\rm la}(G)\\le\\ceil{\\tfrac{\\Delta+1}{2}}$~\\cite{Wu,WW}.\nClearly, the odd case gives much more freedom, since then for each vertex $v$ there is at least one linear forest which has at most one edge incident with $v$.\nThe motivating question of this work is: {\\em Can we use this freedom to get an even tighter result?}\nMore specifically, we conjecture the following.\n\n\\begin{conjecture}\n\t\\label{con:new}\n\tFor every planar graph $G$ of odd maximum degree $\\Delta\\ge 7$ the edges of $G$ can be partitioned into $\\tfrac{\\Delta-1}{2}$ linear forests and one matching.\n\\end{conjecture}\n\nNote that Conjecture~\\ref{con:new} cannot be extended to $\\Delta\\in\\{3,5\\}$.\nThis is because Conjecture~\\ref{con:new} implies that the graph under consideration has chromatic index $\\Delta$ (colour each linear forest with two colours and let the matching form the last colour).\nHowever, the 4-clique and the icosahedron, each with one edge subdivided, are well-known to have chromatic index $\\Delta+1$~\\cite{Vizing65}.\n\nOn the other hand, known results imply Conjecture~\\ref{con:new} for $\\Delta\\ge 11$.\nIndeed, pick a $\\Delta$-edge colouring of $G$, which exists by a theorem of Vizing~\\cite{Vizing65}.\nLet $M$ be the matching formed by an arbitrary colour in the colouring.\nThen $G-M$ has maximum degree $\\Delta-1$, which is even.\nThe claim follows by combining the matching $M$ with the partition of $G-M$ into 
$(\\Delta(G)-1)\/2$ linear forests, obtained using Theorem~\\ref{th:cygan}.\n\nThus, the only two missing cases of Conjecture~\\ref{con:new} are $\\Delta=7$ and $\\Delta=9$. In this work, we resolve the latter, as follows.\n\n\\begin{theorem}\\label{th:main}\nFor any planar graph $G$ with maximum degree $\\Delta(G) \\leq 9$, we can partition the edges of $G$ into four linear forests and a matching.\n\\end{theorem}\n\nNote that Theorem~\\ref{th:main} strengthens well-known results stating that graphs in this class have chromatic index $\\Delta$~\\cite{Vizing65} and linear arboricity at most $\\ceil{(\\Delta+1)\/2}$~\\cite{Wu}. By combining Theorem~\\ref{th:main} with Theorem~\\ref{th:cygan} and the argument for the case $\\Delta\\ge 11$ of Conjecture~\\ref{con:new} described above, we get the following corollary.\n\n\\begin{corollary}\nLet $G$ be a planar graph with maximum degree $\\Delta(G) \\geq 9$.\nIf $\\Delta(G)$ is even, the edges of $G$ can be partitioned into $\\Delta(G)\/2$ linear forests.\nIf $\\Delta(G)$ is odd, the edges of $G$ can be partitioned into $(\\Delta(G)-1)\/2$ linear forests and a matching.\n\\end{corollary}\n\nTheorem~\\ref{th:main} is proved with the discharging method, by now a standard tool in the colouring of planar graphs. It can be sketched as follows. We find a set of 10 configurations which cannot appear in a minimal counterexample, i.e., are {\\em reducible}. Then, we show that Euler's formula implies that a minimal counterexample has to contain one of these configurations.\nUnfortunately, our kind of colouring exhibits much less symmetry than, for example, edge colouring or `pure' linear arboricity. This is because in our case a colour forms either a linear forest or a matching. As a result, the reducibility proofs for configurations become even more lengthy and tedious than usual. 
This is why we include traditional hand-made proofs for only two configurations, while the reducibility of all the remaining ones is verified by a computer program.\n\nThe program has been extensively tested for a number of configurations. We compared the results of two different implementations, one in C++, and another in Python.\nThe Python implementation (150 lines of code, excluding comments) was optimized for readability and is publicly available in a GitHub repository \\url{https:\/\/github.com\/lkowalik\/linear-arboricity}. We give a detailed description of the program in Section~\\ref{sec:computer} (and in the comments included in the code).\n\n\\section{Preliminaries}\nAll graphs in this paper are simple, i.e., do not contain multiple edges or loops. By a triangle we mean a cycle of length $3$.\n\nFor a graph $G$ and a vertex $v$ of $G$, by $d_G(v)$ we denote the degree of $v$ in $G$, and we omit the subscript when it is clear from the context.\nA vertex of degree $d$ is called a {\\em $d$-vertex}. The notation {\\em $d^+$-vertex} (resp. {\\em $d^-$-vertex}) means that this vertex is of degree at least (resp. at most) $d$.\n\nA neighbour of degree $d$ adjacent to a vertex $v$ is called a {\\em $d$-neighbour} of $v$. A neighbour of degree at least (resp. at most) $d$ of a vertex $v$ is called a {\\em $d^+$-neighbour} (resp. {\\em $d^-$-neighbour}) of $v$. Given two vertices $u$ and $v$, the vertex $u$ is a \\emph{weak neighbour} (resp. \\emph{semi-weak}) of $v$ if the edge $(u,v)$ belongs to exactly $2$ (resp. exactly $1$) triangles.\n\nBy $\\ell(f)$ we denote the number of edges incident to a face $f$. A face is {\\em big} if $\\ell(f)\\geq 4$.\n\n\n\n A {\\em facial walk} $w$ corresponding\nto a face $f$ is the shortest closed walk induced by all edges incident with $f$.\n\nLet $f$ be a face and let $v_0, v_1, \\ldots, v_{\\ell}$ be the facial walk of $f$, $v_{\\ell}=v_0$. 
Then, for every $i=0,\\ldots,\\ell-1$, the triple $s=(v_{i-1}, v_i, v_{i+1})$ will be called a {\\em $v_i$-segment} or just a {\\em segment} (indices considered modulo $\\ell$).\nThe {\\em length} of $s$ is defined as $\\ell(f)$, the length of $f$.\nWe say that $v_i$ is {\\em incident} to $s$. We stress that $v_{i-1}$, $v_{i+1}$ are not incident to $s$.\nA \\emph{triangular segment} is a segment $s = (x, y, z)$ where $x$ and $z$ are connected with an edge.\n\nNote that if $v$ is not a cutvertex, there is a one-to-one correspondence between $v$-segments and faces incident to $v$. When $v$ is a cutvertex, a $v$-segment $(x,v,y)$ can be thought of as a region of the face incident with edges $vx$ and $vy$.\n\n\\section{Proof of Theorem~\\ref{th:main}}\\label{sect:thm}\nLet $\\mathcal{G}$ be the family of minimal counterexamples, i.e., $G \\in \\mathcal{G}$ if $G$ is a simple planar graph with $\\Delta(G) \\leq 9$ whose edges cannot be partitioned into four linear forests and a matching and, among such graphs, $G$ has the minimum possible number of edges.\n\n\\subsection{Structure of a minimal counterexample} 
We define configurations \\rxconf{c:edge} to \\rxconf{c:2neighbour33bigface} (see Figure~\\ref{fig:config}).\n\\begin{itemize}\n\\item \\rconf{c:edge}{is an edge $uv$ with $d(u)+d(v) \\leq 10$.}\n\\item \\rconf{c:kite}{is a triangular segment $(u,v,w)$ where $u$ has another neighbour $x$ such that $d(v)=d(x)=11-d(u)$.}\n\\item \\rconf{c:2degree2neighbours}{is a vertex with two $2$-neighbours.}\n\\item \\rconf{c:2consweak3neighbours}{is a vertex $u$ with neighbours $x$, $y$, $w$, $s$, $t$, where $d(y)=d(s)=3$, and $xywst$ is a path.}\n\\item \\rconf{c:2weak3neighbours3neighbour}{is a vertex $u$ with three $3$-neighbours, of which at least two are weak, and $u$ is the only common neighbour of any pair of weak $3$-neighbours of $u$.}\n\\item \\rconf{c:weak3degree2}{is a vertex $u$ with both a weak $3$-neighbour and a $2$-neighbour.}\n\\item \\rconf{c:8weak3degree4}{is a vertex $u$ of degree $8$ with both a weak $3$-neighbour and a $4$-neighbour.}\n\\item \\rconf{c:smalltriangles}{is a triangle $(u, v, w)$ where $d(u)\\le 5$, $d(v)\\le 6$, $d(w)\\leq 8$.}\n\\item \\rconf{c:23triangles-3}{is a vertex $u$ with $3$-neighbours $w$ and $z$, a $2$-neighbour $x$, and a neighbour $y$ which is adjacent to both $z$ and $x$.}\n\\item \\rconf{c:2neighbour33bigface}{is a vertex $u$ with neighbours $v$, $x$, $y$, $z$, $s$, $t$, where $d(v)=2$, $d(y)=d(s)=3$, $y$ and $s$ share a common neighbour $z$ other than $u$, $x$ is adjacent to $y$ and $t$ is adjacent to $s$.}\n\\end{itemize}\n\n\\input{configurationspicture}\n\nWe stress here that configurations are defined as graphs (think: subgraphs of $G \\in \\mathcal{G}$) and not as plane embeddings of graphs. Hence, for example, when we write that $G$ contains \\rxconf{c:kite} it does not imply that the triangle $(u,v,w)$ bounds a triangular face.\n\n\nWe now present human-made proofs that configurations \\rxconf{c:edge} and \\rxconf{c:2degree2neighbours} are reducible. 
In these proofs we use some common notation as follows.\nWe treat partitions into linear forests and a matching as colourings, i.e., functions $c: E(G)\\rightarrow \\{0,\\ldots , 4\\}$ such that for $i=1, \\ldots , 4$ we have that $c^{-1}(i)$ is a linear forest and $c^{-1}(0)$ is a matching. By $d_i(c,v)$ we denote the number of edges which are incident with $v$ and coloured with $i$ in the colouring $c$.\n\n\n\\begin{lemma}\\label{lem:c1}\n\tFor every $G \\in \\mathcal{G}$, graph $G$ does not contain configuration \\rxconf{c:edge}.\n\\end{lemma}\n\\begin{proof}\n\tAssume for a contradiction that there is an edge $uv$ such that $d(u)+d(v)\\le 10$.\n\tLet $G'$ be the planar graph obtained from $G$ by removing edge $uv$.\n\tSince $G$ is a minimal counterexample, there is a colouring $c'$ of $G'$.\n\n\tNow we extend $c'$ to a colouring $c$ of $G$ to get a contradiction.\n\tThere are two cases to consider.\n\tIf there is a colour $i\\in\\{1,2,3,4\\}$ such that $d_i(c',u)+d_i(c',v)\\le 1$ we put $c(uv)=i$. Then $d_i(c,u) = 1$ and $d_i(c,v) \\le 2$ or vice versa, so the edges of colour $i$ still form a collection of paths, and $c$ is as desired.\n\tHence we can assume that for every colour $i\\in\\{1,2,3,4\\}$ we have $d_i(c',u)+d_i(c',v)\\ge 2$. Then, since $d_{G'}(u)+d_{G'}(v)\\le 8$, each of these four sums equals exactly $2$ and we get that $d_0(c',u)=d_0(c',v)=0$. 
We simply put $c(uv)=0$.\n\\end{proof}\n\n\\begin{corollary}\\label{lem:no1degree}\n\tFor every $G \\in \\mathcal{G}$, graph $G$ does not contain $1$-vertices.\n\\end{corollary}\n\\begin{proof}\n Immediate from Lemma~\\ref{lem:c1}, since $\\Delta(G)\\le 9$.\n\\end{proof}\n\n\\begin{lemma}\\label{lem:c3}\n\t\tFor every $G \\in \\mathcal{G}$, if graph $G$ does not contain \\rxconf{c:kite}, then $G$ does not contain configuration~\\rxconf{c:2degree2neighbours}.\n\\end{lemma}\n\\begin{proof}\n\n\t\\begin{figure}[ht]\n\t\t\\centering\n\t\t\\subfloat[][]{\n\t\t\t\\centering\n\t\t\t\\begin{tikzpicture}[scale=0.95]\n\t\t\t\\tikzstyle{whitenode}=[draw,circle,fill=white,minimum size=8pt,inner sep=0pt]\n\t\t\t\\tikzstyle{blacknode}=[draw,circle,fill=black,minimum size=6pt,inner sep=0pt]\n\t\t\t\\tikzstyle{tnode}=[draw,ellipse,fill=white,minimum size=8pt,inner sep=0pt]\n\t\t\t\\tikzstyle{texte}=[fill=white, text=black]\n\t\t\t\\draw (2,-3) node[whitenode] (v) [label=right:$v$] {}\n\t\t\t-- ++(60:1cm) node[whitenode] (y) [label=right:$y$] {\\tiny{$2$}}\n\t\t\t-- ++(120:1cm) node[whitenode] (a) [] {}\n\t\t\t-- ++(240:1cm) node[whitenode] (x) [label=left:$x$] {\\tiny{$2$}};\n\t\t\t\\draw (x) edge [] node [label=left:] {} (v);\n\t\t\t\\node[right=0pt] at (2.2,-1.2) {$a=b$};\n\t\t\t\\end{tikzpicture}\n\t\t\n\t\t\t\\label{fig:claim3case1}\n\t\t}\n\t\t\\subfloat[][]{\n\t\t\t\\centering\n\t\t\t\\begin{tikzpicture}[scale=0.95]\n\t\t\t\\tikzstyle{whitenode}=[draw,circle,fill=white,minimum size=8pt,inner sep=0pt]\n\t\t\t\\tikzstyle{blacknode}=[draw,circle,fill=black,minimum size=6pt,inner sep=0pt]\n\t\t\t\\tikzstyle{tnode}=[draw,ellipse,fill=white,minimum size=8pt,inner sep=0pt]\n\t\t\t\\tikzstyle{texte}=[fill=white, text=black]\n\t\t\t\\draw (0,-3) node[whitenode] (a) [label=below:$a$] {}\n\t\t\t-- ++(360:1cm) node[whitenode] (x) [label=below:$x$] {\\tiny{$2$}}\n\t\t\t-- ++(360:1cm) node[whitenode] (v) [label=below:$v$] {}\n\t\t\t-- ++(360:1cm) node[whitenode] (y) [label=below:$y$] 
{\\tiny{$2$}}\n\t\t\t-- ++(360:1cm) node[whitenode] (b) [label=below:$b$] {};\n\t\t\t\\end{tikzpicture}\n\t\t\n\t\t\t\\label{fig:claim3case2}\n\t\t}\n\t\t\\caption{\\label{fig:config3} Two possible cases when configuration \\rxconf{c:2degree2neighbours} appears.}\n\t\\end{figure}\n\n\tSuppose for a contradiction that a vertex $v$ has two $2$-neighbours, namely $x$ and $y$. Let $a$ (resp. $b$) be the neighbour of $x$ (resp. $y$) different from $v$. Note that $a$ may coincide with $b$ (see Fig.~\\ref{fig:config3}).\n\n\tWe consider two cases.\n\n\t\\begin{enumerate}\n\t\t\\item $a = b$\n\n\t\tConsider $G' = (G - x) \\cup \\{av\\}$. Note that $d_{G'}(v)=d_G(v) = 9$, as \\rxconf{c:edge} is excluded in $G$. Moreover, $G'$ is simple, because if $av\\in E(G)$, then $G$ contains \\rxconf{c:kite} (triangle $(v, x, a)$ and edge $vy$). By the minimality of $G$ there exists a colouring $c'$ of $G'$. Now we define a colouring $c$ of $G$. For every edge $e\\in E(G) \\setminus \\{ ax,\\ ay,\\ vx,\\ vy\\}$ we put $c(e) = c'(e)$.\n\t\tLet $\\alpha$, $\\beta$, $\\gamma$ be the colours in $c'$ of $ay$, $yv$, $av$ respectively.\n\t\tWe put $c(ay)=\\alpha $, $c(yv)=\\gamma$, $c(ax) = \\gamma$ and $c(xv) = \\beta$. Note that $d_i(c',v) = d_i(c, v)$ and $d_i(c', a) = d_i(c,a)$ for $i = 0, \\ldots, 4$. 
Moreover, $d_i(c,x)$, $d_i(c,y)$ are not greater than $2$ for $i = 1,\\ 2,\\ 3,\\ 4$ and $d_0(c,x)$, $d_0(c,y)$ are not greater than $1$, as otherwise $c'$ contains incident edges coloured with $0$.\n\t\tHence, it suffices to show that there is no monochromatic cycle in $c$.\n\n\t\tAssume there is a monochromatic cycle $C$ in $c$.\n\t\tThen $C$ contains one of the edges $ax$, $xv$, $vy$, because otherwise $C$ is monochromatic in $c'$, a contradiction.\n\t\tSince $d_G(x)=2$ and $d_G(y)=2$, it follows that $C$ contains the path $axv$ or $ayv$.\n\t\tBy symmetry, assume the former.\n\t\tIt follows that $\\beta=\\gamma$ and $C$ is coloured by $\\gamma$.\n\t\tThen $C$ contains $vy$ (because $vy$ is coloured by $\\gamma$).\n\t\tSince $d_G(y)=2$, also $ya \\in C$.\n\t\tHence, $\\alpha=\\gamma$.\n\t\tThus, we obtained that $\\alpha=\\beta=\\gamma$ and the triangle $ayv$ was monochromatic in $c'$, a contradiction.\n\n\t\t\\item $a \\neq b$\n\t\t\n Since we excluded configuration \\rxconf{c:edge}, $d(v) = 9$.\n\t\tNote that $G$ contains neither edge $av$ nor $vb$ as we excluded configuration \\rxconf{c:kite}. Consider a simple graph $G' = G\\setminus \\{x,y\\} \\cup \\{av,\\ vb\\} $. By the minimality of $G$ there exists a colouring $c'$ of $G'$. Now we define a colouring $c$ of $G$. For every edge $e\\in E(G) \\setminus \\{ ax,\\ xv,\\ vy,\\ yb\\}$ we put $c(e) = c'(e)$.\n\t\tLet $\\alpha$ and $\\beta$ be the colours in $c'$ of $av$ and $vb$, respectively.\n\t\tWe can put $c(ax)=\\alpha,\\ c(xv)=\\beta,\\ c(vy)=\\alpha,\\ c(yb)=\\beta$. Note that $d_i(c',v) = d_i(c,v),\\ d_i(c', a) = d_i(c, a),\\ d_i(c', b) = d_i(c, b)$ for $i = 0,\\ldots, 4$. Moreover, $d_i(c,x),\\ d_i(c,y)$ are not greater than $2$ for $ i = 1,\\ 2,\\ 3,\\ 4$ and $d_0(c,x),\\ d_0(c,y)$ are not greater than $1$, otherwise $c'$ contains incident edges coloured with $0$.\n\n\t\tWe also claim that there is no monochromatic cycle in $c$. Indeed, if there is a cycle formed by edges in colour $\\alpha$ (resp. 
$\\beta$), then it goes through $x$ and $y$, so $\\alpha = \\beta$ and there is a monochromatic cycle in $c'$, a contradiction.\n\t\\end{enumerate}\n\n\n\n\\end{proof}\n\n\n\n\\begin{lemma}\\label{lem:config}\n\tFor every $G \\in \\mathcal{G}$, graph $G$ does not contain any of configurations \\rxconf{c:kite} and \\rxconf{c:2consweak3neighbours} to \\rxconf{c:2neighbour33bigface}.\n\\end{lemma}\n\n\\begin{proof}\nThis proof is done by a computer program, see Section~\\ref{sec:computer} for details.\n\\end{proof}\n\n\n\\subsection{A minimal counterexample does not exist}\n{\\bf Initial charge}. We consider a planar embedding $\\mathcal{M}=(V,E,F)$ of $G \\in \\mathcal{G}$ and attribute a charge $\\text{ch}(v)=d(v)-4$ to each vertex $v$ and a charge $\\text{ch}(f)=\\ell (f)-4$ to each face $f$.\n\n\\vspace{3mm}\n\\noindent {\\bf Discharging rules} We introduce four discharging rules as follows:\n\\begin{itemize}\n\\item \\rrule{r:degree2}{applies to a vertex $u$ with a neighbour $v$ of degree $2$. Then $u$ sends $1$ to $v$.}\n\\item \\rrule{r:degree3}{applies to a vertex $u$ with a neighbour $v$ of degree $3$. Then $u$ sends $\\tfrac13$ to $v$.}\n\\item \\rrule{r:bigface}{applies to a face $f$ of length $5^+$ and its segment $(x,y,z)$. If $d(x)=d(z)=3$, then $f$ sends $\\tfrac23$ to $y$ through $(x,y,z)$.\n\tIf $d(x)=2$ or $d(z)=2$, then $f$ sends $\\tfrac12$ to $y$ through $(x,y,z)$.}\n\\item \\rrule{r:triangle}{applies to a vertex $u$ incident to a triangular face $f = (v,u,w)$. 
Then $u$ sends $m(d(v),d(u),d(w))$ through segment $(v, u, w)$ to $f$, as described in Table~\\ref{table:triangle}.}\n\\end{itemize}\n\nIn \\rxrule{r:bigface} we send a charge from a face to segments and then they pass the charge to vertices.\nThis may look overcomplicated at the moment, but will become handy in the proof.\n\n\\tabulinesep=1.5mm\n\\begin{table}\n\\centering\n\\begin{tabu}{c|c|c|c|c|c|c|c|}\n\\cline{2-8}\n & \\multicolumn{1}{ c }{\\multirow{3}{*}{$a \\leq 4$}} & \\multicolumn{1}{ |c }{\\multirow{3}{*}{$a=5$}} & \\multicolumn{1}{ |c | }{\\multirow{3}{*}{$a =6$}} & \\multicolumn{4}{c|}{$a \\geq 7$}\\\\\n \\cline{5-8}\n & \\multicolumn{1}{ c }{} & \\multicolumn{1}{ |c }{} & \\multicolumn{1}{ |c | }{} & \\multicolumn{1}{c|}{\\multirow{2}{*}{$b\\leq 4$}} & \\multicolumn{2}{c|}{$b=5$} & \\multicolumn{1}{ c| }{\\multirow{2}{*}{$b \\geq 6$}} \\\\\n \\cline{6-7}\n & \\multicolumn{1}{ c }{} & \\multicolumn{1}{ |c }{} & \\multicolumn{1}{ |c }{} & \\multicolumn{1}{ |c| }{} & $c=6$ & $c \\geq 7$ & \\multicolumn{1}{ c | }{} \\\\\n \\hline\n \\multicolumn{1}{ |c | }{$m(b,a,c)$} & $0$ & $\\tfrac{1}{5}$ & $\\tfrac13$ & $\\tfrac12$ & $\\tfrac{7}{15}$ & $\\tfrac25$ & $\\tfrac13$ \\\\\n \\hline\n\\end{tabu}\n\\caption{\\label{table:triangle}A table to define the weight redistribution by Rule~\\ref{r:triangle}. We consider $b \\leq c$ and set $m(c,a,b)=m(b,a,c)$. Note that the case $b=c=5$ is not used in Rule~\\ref{r:triangle}, because this would mean that edge $vw$ contradicts Lemma~\\ref{lem:c1}.}\n\\end{table}\n\n\nIn what follows, we will show that for every face and for every vertex, the final charge, after applying the rules, is non-negative.\n\n\\begin{lemma}\\label{l:facescharge}\n Let $G$ be a graph belonging to the family $\\mathcal{G}$ and let $\\mathcal{M}$ be its planar embedding. 
Then, after applying the discharging rules to $G$, for every face $f$ of $\\mathcal{M}$ its final charge is non-negative.\n\\end{lemma}\n\\begin{proof}\n Let $f$ be a face of $\\mathcal{M}$. Recall that by $\\ell(f)$ we denote the length of $f$. We consider the following cases.\n\\begin{itemize}\n \\item ${\\ell(f)=3}$\n\n The initial charge of $f$ is $-1$. It receives charge from the vertices on its boundary, depending on their degrees, by \\rxrule{r:triangle}. Consider a few subcases depending on the sorted degree sequence of the vertices on $f$ (note that these are the only possible triangles as we excluded \\rxconf{c:edge}):\n \\begin{itemize}\n \\item $(4^-, 7^+, 7^+)$. The $7^+$ vertices send $\\tfrac12$ to $f$, so its received charge is $2\\cdot \\tfrac12 = 1$.\n \\item $(5,6, 8^-)$. These triangles are excluded by \\rxconf{c:smalltriangles}.\n \\item $(5,6,9)$. Then, $f$ receives $\\tfrac15 +\\tfrac13+\\tfrac7{15} = 1$ from its vertices.\n \\item $(5,7^+, 7^+)$. Then, $f$ receives $\\tfrac15 + \\tfrac25+\\tfrac25 = 1$ from its vertices.\n \\item $(6^+, 6^+, 6^+)$. Then, $f$ receives at least $3\\cdot \\tfrac13 = 1$ from its vertices.\n \\end{itemize}\n In each case $f$ receives at least $1$, so its final charge is non-negative.\n \\item ${\\ell(f)=4}$\n\n The initial and the final charge of $f$ is $0$, as it does not send or receive any charge.\n \\item ${\\ell(f)=5}$\n\n The initial charge of $f$ is $1$. Note that since \\rxconf{c:edge}, \\rxconf{c:2degree2neighbours} and $1$-vertices are excluded, $f$~has either two vertices of degree $3$, or at most one vertex of degree $2$, on its boundary. In these situations $f$ sends $\\tfrac23$ or at most $2\\cdot \\tfrac12$, respectively, by \\rxrule{r:bigface}. 
In both cases its final charge remains non-negative.\n\n \\begin{figure}[ht]\n \\centering\n \\subfloat[][]{\n \\centering\n \\begin{tikzpicture}[scale=0.95]\n \\tikzstyle{whitenode}=[draw,circle,fill=white,minimum size=8pt,inner sep=0pt]\n \\tikzstyle{blacknode}=[draw,circle,fill=black,minimum size=6pt,inner sep=0pt]\n \\tikzstyle{tnode}=[draw,ellipse,fill=white,minimum size=8pt,inner sep=0pt]\n \\tikzstyle{texte} =[fill=white, text=black]\n \\draw (2,-3) node[whitenode] (v) [] {}\n -- ++(72:1cm) node[whitenode] (u) [] {\\tiny{3}}\n -- ++(144:1cm) node[whitenode] (x1) [] {}\n -- ++(216:1cm) node[whitenode] (x2) [] {}\n -- ++(288:1cm) node[whitenode] (x3) [] {\\tiny{3}};\n \\draw (x3) edge [] node [label=left:] {} (v);\n \\end{tikzpicture}\n \n \\label{fig:c2casea}\n }\n \\subfloat[][]{\n \\centering\n \\begin{tikzpicture}[scale=0.95]\n \\tikzstyle{whitenode}=[draw,circle,fill=white,minimum size=8pt,inner sep=0pt]\n \\tikzstyle{blacknode}=[draw,circle,fill=black,minimum size=6pt,inner sep=0pt]\n \\tikzstyle{tnode}=[draw,ellipse,fill=white,minimum size=8pt,inner sep=0pt]\n \\tikzstyle{texte} =[fill=white, text=black]\n \\draw (2,-3) node[whitenode] (v) [] {}\n -- ++(72:1cm) node[whitenode] (u) [] {\\tiny{$2$}}\n -- ++(144:1cm) node[whitenode] (x1) [] {}\n -- ++(216:1cm) node[whitenode] (x2) [] {}\n -- ++(288:1cm) node[whitenode] (x3) [] {};\n \\draw (x3) edge [] node [label=left:] {} (v);\n \\end{tikzpicture}\n \n \\label{fig:c2caseb}\n }\n \\caption{\\label{fig:5face} $5$-faces sending any charge to vertices}\n \\end{figure}\n \\item ${\\ell(f)\\geq 6}$\n\n We will show the following claim.\n\n \\begin{claim}\\label{l:face6}\n \t$f$ sends at most $\\ell(f)\/3$ of charge.\n \\end{claim}\n \\begin{proof}\n \t\\renewcommand\\qedsymbol{$\\lrcorner$}\n Let us implement \\rxrule{r:bigface} in the following way.\n\t\\begin{quote} \\it\n\t\tFace $f$ sends $\\tfrac13$ to every edge on its boundary.\n\tNext, for every edge $e = uv$ on the boundary of $f$ we do the 
following.\n\tLet $xuvy$ be a fragment of the facial walk of $f$.\n\tIf $v$ is a $2$- or $3$-vertex, $e$ sends $\\tfrac13$ to $u$ through $(x,u,v)$.\n\tSimilarly, if $u$ is a $2$- or $3$-vertex, $e$ sends $\\tfrac13$ to $v$ through $(u,v,y)$.\n\tOtherwise, $e$ sends $\\tfrac16$ both to $u$ and $v$ through $(x,u,v)$ and $(u,v,y)$, respectively.\n\t\\end{quote}\n\tNote that every edge sends at most $\\tfrac13$ of charge, as $3^-$-vertices cannot be connected by an edge, because \\rxconf{c:edge} is excluded.\n\tHence, it suffices to show that for every segment $s=(u,v,w)$, if $v$ fulfills the conditions from \\rxrule{r:bigface}, then $v$ gets the appropriate charge from $s$.\n\tIndeed, if $d(u)=d(w)=3$, then $v$ gets $\\tfrac13$ through $s$ from both $uv$ and $vw$, so it gets $\\tfrac23$ as required.\n\tOtherwise, $d(u)=2$ or $d(w)=2$, by symmetry assume the latter.\n\tThen, $v$ gets $\\tfrac13$ through $s$ from $vw$ and at least $\\tfrac16$ from $vu$ through $s$, so $v$ gets at least $\\tfrac12$ through $s$, as required.\n \\end{proof}\n\n Since the initial charge of $f$ is $\\ell(f)-4$ and $\\ell(f)-4-\\ell(f)\/3\\ge 0$ whenever ${\\ell(f)\\geq 6}$, this concludes the proof of Lemma~\\ref{l:facescharge}.\n\\end{itemize}\n\\end{proof}\n\n\\begin{lemma}\\label{l:vertexcharge}\n Let $G$ be a graph belonging to the family $\\mathcal{G}$. Then, after applying the discharging rules to $G$, the final charge of every vertex $v$ is non-negative.\n\\end{lemma}\n\\begin{proof}\n Let $v$ be a vertex of $G$. 
Let us state some simple observations which follow from the discharging rules, and will be used frequently in the remainder.\n \\begin{obs}\n Vertex $v$ sends nothing to incident non-triangular segments.\n \\end{obs}\n \\begin{obs}\n Vertex $v$ sends at most $\\tfrac12$ to an incident triangular segment.\n \\end{obs}\n \\begin{obs}\n Vertex $v$ sends at most $\\tfrac13$ to an incident triangular segment with no $5^-$ vertices.\n \\end{obs}\n\n In what follows we consider cases depending on the degree of $v$.\n \\\\\\\\ Assume ${d(v) \\leq 1}$.\n By Corollary \\ref{lem:no1degree} there are no vertices of this kind in $G$.\n \\\\\\\\ Assume ${d(v) = 2}$.\n The initial charge of $v$ is $-2$. By \\rxrule{r:degree2} $v$ receives $1$ from each of the two neighbours, and since it does not send any charge, its final charge is $0$.\n \\\\\\\\ Assume ${d(v) = 3}$.\n The initial charge of $v$ is $-1$. By \\rxrule{r:degree3} $v$ receives $\\tfrac13$ from each of the three neighbours, and since it does not send any charge, its final charge is $0$.\n \\\\\\\\ Assume ${d(v) = 4}$.\n The initial charge and the final charge of $v$ are equal to $0$, as it does not send or receive any charge.\n \\\\\\\\ Assume ${d(v) = 5}$.\n The initial charge of $v$ is $1$. Vertex $v$ sends charge only to incident triangular segments by \\rxrule{r:triangle}. Thus, each segment incident to $v$ receives at most $\\tfrac15$ from $v$. Hence, $v$ sends at most $5\\cdot \\tfrac15 = 1$ and its final charge is non-negative.\n \\\\\\\\ Assume ${d(v) = 6}$.\n The initial charge of $v$ is $2$. Vertex $v$ sends charge only to incident triangular segments by \\rxrule{r:triangle}. Thus, each segment incident to $v$ receives at most $\\tfrac13$ from $v$. Hence, $v$ sends at most $6\\cdot \\tfrac13 = 2$ and its final charge is non-negative.\n \\\\\\\\ Assume ${d(v) = 7}$.\n The initial charge of $v$ is $3$. 
Vertex $v$ sends charge only to incident triangular segments by \\rxrule{r:triangle}.\n We claim that $v$ is incident to at most two triangular segments such that each contains a $4$-vertex. Indeed, if $v$ is incident to a triangular segment containing a $4$-vertex $x$, then $v$ cannot have another $4$-neighbour, as the configuration \\rxconf{c:kite} is excluded. Hence every such segment contains the edge $vx$, and there are at most two segments with this property.\n\n By \\rxrule{r:triangle} $v$ sends $\\tfrac12$ to triangular segments containing a $4$-vertex and at most $\\tfrac25$ to each of the remaining segments (by \\rxconf{c:smalltriangles} the triangular segments $(7,6,5)$ are excluded). Thus, $v$ sends at most $2\\cdot \\tfrac12 + (7-2)\\tfrac25 = 3$ to the segments, so its final charge is non-negative.\n \\\\\\\\ Assume ${d(v) = 8}$.\n \\input{deg8}\n \\\\\\\\ Finally, assume ${d(v) = 9}$.\n \\input{deg9}\n\\end{proof}\n\nBy Lemmas \\ref{l:facescharge} and \\ref{l:vertexcharge} every face and every vertex has non-negative final charge. Since the total charge does not change when the discharging rules are applied, we obtain that $\\sum_{v \\in V} (d(v)-4)+\\sum_{f \\in F}(\\ell(f)-4) \\ge 0$, thus $4|E|-4|V|-4|F|\\ge 0$. However, by Euler's formula $|E|-|V|-|F|=-2$, a contradiction. It follows that the minimal counterexample does not exist, and thus we have proved Theorem~\\ref{th:main}.\n\n\n\\refstepcounter{rulecnt}\\label{ruleFinal}\n\\refstepcounter{confcnt}\\label{confFinal}\n\n\\input{computer}\n\n\\section{Conclusion}\\label{sect:concl}\n\nWe have shown that Conjecture~\\ref{con:new} is true for $\\Delta=9$, but the case $\\Delta=7$ is still open. Settling it would be an interesting strengthening of the result of Sanders and Zhao~\\cite{DBLP:journals\/jct\/SandersZ01a}, who proved that planar graphs of maximum degree $7$ are Class I. We were unable to extend our approach to $\\Delta=7$, and we suspect that a different argument, of a more global nature, is needed. 
Indeed, when $\\Delta=7$ we have less charge at a vertex and we quickly arrive at configurations with elements of negative charge that cannot be reduced using the standard proof generated by our program. Of course it is possible that this can be circumvented by a smart set of discharging rules, but it is worth noting that such a discharging proof is not known for edge coloring (which is a less constrained problem) even for $\\Delta=8$.\n\nThe Planar Linear Arboricity Conjecture (see Cygan et al.~\\cite{CyganHKLW12}) is still open. However, in the spirit of this work one can consider its weaker versions.\nFor example, what is the maximum $k$ such that the edges of a planar graph of maximum degree 8 can be partitioned into $k$ linear forests and $8-2k$ matchings? At the moment, by Vizing's theorem we know that $k\\ge 0$, but even the case $k=1$ has not been investigated.\n\n\\section*{Acknowledgments}\nThe authors are grateful to Marek Cygan, who implemented an early version of the computer program for checking reducibility in the context of linear arboricity. We thank Arkadiusz Soca\u0142a for discussions in the early stages of the project. We are also very grateful to the reviewers for careful reading and numerous helpful comments. 
\n\nThis research is a part of projects that have received funding from the European Research Council (ERC)\nunder the European Union's Horizon 2020 research and innovation programme Grant Agreements 677651 (\u0141K, project TOTAL) and 948057 (\u0141K, MP, project BOBR).\nJC and \u0141K were also supported by the Polish Ministry of Science and Higher Education project Szko\u0142a Or\u0142\u00f3w, project number 500-D110-06-0465160.\n\n\\bibliographystyle{plainurl}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\nWe are interested in the third equation of the integrable Benjamin-Ono hierarchy on the torus\n\\begin{equation}\\label{eq:bo4}\n\\partial_tu=\\partial_x\\left(-\\partial_{xx}u-\\frac{3}{2}uH\\partial_x u-\\frac{3}{2}H(u\\partial_x u)+u^3\\right).\n\\end{equation}\nThe operator $H$ is the Hilbert transform, defined as\n\\[\nH f(x)=\\sum_{n\\in\\mathbb{Z}\\setminus{0}}-i\\,\\mathrm{sgn}(n)\\widehat{f}(n)\\e^{inx},\n\t\\quad f=\\sum_{n\\in\\mathbb{Z}}\\widehat{f}(n)\\e^{inx},\n\t\\quad \\widehat{f}(n)=\\frac{1}{2\\pi}\\int_0^{2\\pi} f(x)\\e^{-inx}\\d x.\n\\]\n\n\\subsection{Benjamin-Ono equations and integrability\n\nThe Benjamin-Ono equation on the torus\n\\[\n\\partial_t u=H \\partial_{xx}u-\\partial_x(u^2),\n\\]\nwas introduced by Benjamin \\cite{Benjamin1967} and Ono \\cite{Ono1977} in order to describe long internal waves in a two-layer fluid of great depth.\nThis equation admits an infinite number of conserved quantities $\\mathcal{H}_k,k\\geq 1$ (see Nakamura \\cite{Nakamura1979} for a proof on the real line). 
The evolution equations associated to the conservation laws\n\\begin{equation}\\label{eq:BOhierarchy}\n\\partial_t u=\\partial_x(\\nabla \\mathcal{H}_k(u))\n\\end{equation}\nare the equations for the Benjamin-Ono hierarchy \\cite{Matsuno1984}.\n\nFrom Nakamura \\cite{Nakamura1979-2} and Bock, Kruskal \\cite{BockKruskal1979}, we know that the Benjamin-Ono equation admits a Lax pair\n\\[\n\\frac{\\d}{\\d t}L_u=[B_u,L_u],\n\\]\n\\[\nL_u=D-T_u,\t\\quad B_u=iD^2+2iT_{D(\\Pi u)}-2iDT_u.\n\\]\nHere, $ D=-i\\partial_x$ and $T_u$ is the Toeplitz operator on the Hardy space\n\\[\nL^2_+(\\mathbb{T})=\\{h\\in L^2(\\mathbb{T})\\mid \\forall n<0,\\quad \\widehat{h}(n)=0\\}\n\\]\ndefined as\n\\[\nT_u:h\\in L^2_+(\\mathbb{T})\\mapsto \\Pi(uh)\\in L^2_+(\\mathbb{T}),\n\\]\nand $\\Pi:L^2(\\mathbb{T})\\to L^2_+(\\mathbb{T})$ is the Szeg\\H{o} projector.\nThe Hamiltonians $\\mathcal{H}_k(u)$ are defined from the Lax operator $L_u$ as\n\\begin{equation}\\label{eq:Hk_Lax}\n\\mathcal{H}_k(u)=\\langle L_u^{k}\\mathds{1}|\\mathds{1}\\rangle.\n\\end{equation}\nIn particular, the Hamiltonian for equation \\eqref{eq:bo4} is\n\\begin{equation}\\label{eq:hamilton4}\n\\mathcal{H}_4(u)+\\frac{1}{2} \\mathcal{H}_2(u)^2=\\frac{1}{2\\pi}\\int_0^{2\\pi} \\left( \\frac{1}{2} (\\partial_x u)^2-\\frac{3}{4}u^2H\\partial_xu+\\frac{1}{4}u^4\\right)\\d x.\n\\end{equation}\n\nIn \\cite{GerardKappeler2019}, G\u00e9rard and Kappeler constructed global Birkhoff coordinates for the Benjamin-Ono equation on the torus. In these coordinates, the evolution equations for the Benjamin-Ono hierarchy are easier to understand. 
Indeed, denote by $\\Phi$ the Birkhoff map\n\\[\n\\Phi:u\\in L^2_{r,0}(\\mathbb{T})\\mapsto (\\zeta_n(u))_{n\\geq1}\\in h^{\\frac{1}{2}}_+,\n\\]\nwhere $L^2_{r,0}(\\mathbb{T})$ is the subspace of real-valued functions in $L^2(\\mathbb{T})$ with zero mean, and \n\\[\nh^{\\frac{1}{2}}_+\n\t=\\Big\\{(\\zeta_n)_{n\\geq 1}\\mid \\sum_{n\\geq 1}n|\\zeta_n|^2<+\\infty\\Big\\}.\n\\]\nThen in the Birkhoff coordinates, equation \\eqref{eq:BOhierarchy} of the hierarchy associated to $\\mathcal{H}_k$ \nbecomes\n\\[\n\\partial_t \\zeta_n=i\\omega^{(k)}_n\\zeta_n,\n\t\\quad n\\geq 1\n\\]\nwhen the frequencies\n\\[\n\\omega^{(k)}_n\n\t=\\frac{\\partial (\\mathcal{H}_k\\circ\\Phi^{-1})}{\\partial |\\zeta_n|^2}\n\\]\nare well-defined. For instance, this formula is valid if the sequence $\\zeta(0)=(\\zeta_n(0))_{n\\geq 1}$ only has a finite number of nonzero terms, or in other words, if $\\Phi^{-1}(\\zeta(0))$ is a finite gap potential.\nIn this case, the frequencies $\\omega^{(k)}_n$ only depend on the actions $|\\zeta_p|^2$, and the evolution simply reads\n\\[\n\\zeta_n(t)=\\zeta_n(0)\\e^{i\\omega^{(k)}_n(\\zeta(0))t},\n\t\\quad t\\in\\mathbb{R},\n\t\\quad n\\geq 1.\n\\]\nFor the third equation of the hierarchy \\eqref{eq:bo4}, the frequencies read\n\\begin{equation}\\label{eq:omega_n^4}\n\\omega_n^{(4)}(\\zeta)\n\t=n^3+n\\sum_{p\\geq 1}p|\\zeta_p|^2-3\\sum_{p\\geq1}\\min(p,n)^2|\\zeta_p|^2+3\\sum_{p,q\\geq 1}\\min(p,q,n)|\\zeta_p|^2|\\zeta_q|^2.\n\\end{equation}\nMore details about the frequencies $\\omega_n^{(k)}$ and formula \\eqref{eq:omega_n^4} can be found in Appendix~\\ref{part:hierarchy}.\n\nWe refer to Saut \\cite{Saut2018} for a detailed survey of the Benjamin-Ono equation and of its hierarchy.\n\n\n\\subsection{Main results}\n\nOur first main result is the determination of the well-posedness threshold for the third order Benjamin-Ono equation. 
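Since only finitely many actions are nonzero for a finite gap potential, formula \eqref{eq:omega_n^4} can be evaluated directly. A minimal Python sketch encoding the formula verbatim (function and variable names are ours, not from the paper):

```python
# Numerical evaluation of the frequencies omega_n^{(4)} of eq. (eq:omega_n^4)
# for a finite gap potential, described by its nonzero actions |zeta_p|^2.

def omega4(n, actions):
    """actions: dict mapping an index p >= 1 to the action |zeta_p|^2."""
    s1 = sum(p * g for p, g in actions.items())
    s2 = sum(min(p, n) ** 2 * g for p, g in actions.items())
    s3 = sum(min(p, q, n) * gp * gq
             for p, gp in actions.items()
             for q, gq in actions.items())
    return n ** 3 + n * s1 - 3 * s2 + 3 * s3

# Zero potential: the flow is the linear flow with omega_n = n^3.
print(omega4(2, {}))        # 8
# One gap at p = 1 with action 1: omega_1 = 1 + 1 - 3 + 3 = 2.
print(omega4(1, {1: 1.0}))  # 2.0
```

In particular, the sketch illustrates that $\omega_n^{(4)}$ depends on the actions $|\zeta_p|^2$ only, which is what makes the explicit solution formula above possible.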
For $s\\in\\mathbb{R}$, we use the notation \n\\[\nH^{s}_{r,0}(\\mathbb{T})=\\{u\\in H^s(\\mathbb{T},\\mathbb{R})\\mid \\langle u|\\mathds{1}\\rangle =0\\}.\n\\]\nWe prove that the flow map is globally $\\mathcal{C}^0$-well-posed (in the sense of Definitions 1 and 2 from \\cite{GerardKappelerTopalov2019}) in $H^{s}_{r,0}(\\mathbb{T})$ when $s\\geq 0$, but is not globally $\\mathcal{C}^0$-well-posed in $H^{-s}_{r,0}(\\mathbb{T})$ when $0<s<\\frac{1}{2}$.\n\n\\begin{thm}\nLet $s\\geq 0$. Then equation \\eqref{eq:bo4} is globally $\\mathcal{C}^0$-well-posed in $H^{s}_{r,0}(\\mathbb{T})$. Moreover, the flow map is weakly sequentially continuous in $H^{s}_{r,0}(\\mathbb{T})$ when $s>0$, but is not weakly sequentially continuous in $L^2_{r,0}(\\mathbb{T})$.\n\\end{thm}\n\n\n\nIn \\cite{GerardKappelerTopalov2019}, G\u00e9rard, Kappeler and Topalov proved that the flow map for the Benjamin-Ono equation is globally $\\mathcal{C}^0$-well-posed in $H^{s}_{r,0}(\\mathbb{T})$ for $s>-\\frac{1}{2}$, whereas from \\cite{PavaHakkaev2010} there is no continuous extension of the flow map to $H^s_{r,0}(\\mathbb{T})$ when $s<-\\frac{1}{2}$. We expect that the well-posedness threshold on the torus increases by $\\frac{1}{2}$ for each new equation in the hierarchy: for the equation corresponding to the $k$-th Hamiltonian $\\mathcal{H}_k$, $k\\geq 4$, the threshold should be $H^{\\frac{k}{2}-2}_{r,0}(\\mathbb{T})$ (see Remarks \\ref{rk:GWP} and~\\ref{rk:WP}). Note that all the equations for the Benjamin-Ono hierarchy have critical Sobolev exponent $-\\frac{1}{2}$.\n\n\nLet us mention former approaches to the Cauchy problem for higher order Benjamin-Ono equations.\nTanaka \\cite{Tanaka2019} considered more general third order type Benjamin-Ono equations on the torus \n\\[\n\\partial_t u=\\partial_x(-\\partial_{xx}u-c_1uH\\partial_x u-c_2H(u\\partial_x u)+u^3),\n\\]\nand proved local well-posedness in $H^s(\\mathbb{T})$ for $s>\\frac{5}{2}$. 
He deduced global well-posedness in $H^s(\\mathbb{T})$, $s\\geq 3$ for the integrable case $c_1=c_2=\\frac{3}{2}$.\n\nOn the real line, Feng and Han \\cite{FengHan1996} proved local well-posedness in $H^s(\\mathbb{R})$, $s\\geq 4$ for the third equation of the Benjamin-Ono hierarchy \\eqref{eq:bo4}. Considering more general third order type Benjamin-Ono equations under the form\n\\[\n\\partial_t v-bH\\partial_{xx}v-a\\partial_{xxx}v=cv\\partial_xv-d\\partial_x(vH \\partial_x v+H(v\\partial_xv)),\n\\]\nLinares, Pilod and Ponce \\cite{LinaresPilodPonce2011} established local well-posedness in $H^s(\\mathbb{R})$, $s\\geq2$, then Molinet and Pilod \\cite{MolinetPilod2012} proved global well-posedness in $H^s(\\mathbb{R})$, $s\\geq 1$.\n\nConcerning Benjamin-Ono equations of fourth order on the torus and on the real line, Tanaka \\cite{Tanaka2019-2} proved local well-posedness in $H^s$, $s>\\frac{7}{2}$ for a more general family of fourth order type Benjamin-Ono equations, and deduced global well-posedness in $H^s$, $s\\geq 4$ in the integrable case.\n\\newline\n\n\nOur second main result is the classification of the traveling waves for the third order Benjamin-Ono equation in $L^2_{r,0}(\\mathbb{T})$, i.e.\\@ the solutions to \\eqref{eq:bo4} under the form $u(t,x)=u_0(x+ct)$, $t\\in\\mathbb{R}$, $x\\in\\mathbb{T}$, $u_0\\in L^2_{r,0}(\\mathbb{T})$.\n\n\\begin{mydef}\nFor $N\\geq 1$, we say that $u\\in L^2_{r,0}(\\mathbb{T})$ is an $N$ gap potential if the set $\\{n\\geq 1\\mid \\zeta_n(u)\\neq 0\\}$, where $\\Phi(u)=(\\zeta_n(u))_{n\\geq1}$, is finite and of cardinality $N$.\n\\end{mydef}\n\\begin{thm}\nA potential $u_0\\in L^2_{r,0}(\\mathbb{T})$ defines a traveling wave for equation \\eqref{eq:bo4} if and only if\n\\begin{itemize}\n\\item either $u_0$ is a one gap potential;\n\\item or $u_0$ is a two gap potential whose two nonzero indexes $p<q$ and corresponding actions $|\\zeta_p(u_0)|^2$, $|\\zeta_q(u_0)|^2$ satisfy an explicit algebraic compatibility condition.\n\\end{itemize}\n\\end{thm}\n\n\\begin{mydef}\nLet $u_0\\in L^2_{r,0}(\\mathbb{T})$ be a traveling wave for equation \\eqref{eq:bo4}. We say that $u_0$ is orbitally stable if for all $\\varepsilon>0$, there exists $\\delta>0$ such that if $v$ is a solution to \\eqref{eq:bo4} with initial condition $v_0\\in
L^2_{r,0}(\\mathbb{T})$ such that $\\|v_0-u_0\\|_{L^2(\\mathbb{T})}\\leq \\delta$, then\n\\[\n\\sup_{t\\in\\mathbb{R}}\\inf_{\\theta\\in\\mathbb{T}}\\|v(t)-u_0(\\cdot+\\theta)\\|_{L^2(\\mathbb{T})}\\leq \\varepsilon.\n\\]\n\\end{mydef}\n\n\\begin{thm}\nThe one gap traveling waves are orbitally stable, whereas the two gap traveling waves are orbitally unstable.\n\\end{thm}\n\nFor the Benjamin-Ono equation, Pava and Natali \\cite{PavaNatali2008} proved the orbital stability of the traveling wave solutions in $H^{\\frac{1}{2}}_{r,0}(\\mathbb{T})$. In \\cite{GerardKappelerTopalov2019}, G\u00e9rard, Kappeler and Topalov improved the orbital stability of these solutions to $H^{-s}_{r,0}(\\mathbb{T})$, $0\\leq s<\\frac{1}{2}$.\n\n\\paragraph{Plan of the paper} The paper is organized as follows. We first prove the well-posedness threshold for the third order Benjamin-Ono equation \\eqref{eq:bo4} in Section \\ref{part:GWP}. Finally, in Section \\ref{part:traveling_waves}, we classify the traveling wave solutions and study their orbital stability properties.\n\nIn Appendix \\ref{part:hierarchy}, we describe how to compute the Hamiltonians $\\mathcal{H}_k$ and frequencies $\\omega^{(k)}_n=\\frac{\\partial \\mathcal{H}_k\\circ\\Phi^{-1}}{\\partial |\\zeta_n|^2}$ in terms of the action variables $|\\zeta_p|^2$. In Appendix \\ref{part:appendix}, we retrieve the Hamiltonian and frequencies of the third order Benjamin-Ono equation (see formulas~\\eqref{eq:hamilton4} and \\eqref{eq:omega_n^4}) by starting from the definition \\eqref{eq:Hk_Lax} of the higher order Hamiltonians. 
In Appendix~\\ref{part:appendix_structure}, we provide an alternative proof of a result from \\cite{TzvetkovVisciglia2014} about the structure of the higher order Hamiltonians by using formula~\\eqref{eq:Hk_Lax}, which may be of independent interest.\n\n\\paragraph{Acknowledgements} The author would like to thank her PhD advisor Professor P.\\@ G\u00e9rard for introducing her to this problem, and for his continuous support and advice.\n\n\n\\section{Well-posedness threshold for the fourth Hamiltonian}\\label{part:GWP}\n\nLet $N\\in\\mathbb{N}$ and let $\\mathcal{U}_N$ be the set\n\\[\n\\mathcal{U}_N\n\t=\\{u\\in L^2_{r,0}(\\mathbb{T})\\mid \\zeta_N(u)\\neq 0 ,\\quad \\zeta_j(u)=0 \\quad\\forall j>N\\}.\n\\]\nWe know from \\cite{GerardKappeler2019}, Theorem 3, that the restriction of the Birkhoff map $\\Phi$ to $\\mathcal{U}_N$ is a real analytic diffeomorphism onto some Euclidean space. In Birkhoff coordinates, the evolution along the flow of equation \\eqref{eq:bo4} for initial data $u_0\\in\\mathcal{U}_N$ reads\n\\[\\begin{cases}\n\\partial_t\\zeta_n=i\\omega_n^{(4)}(u_0)\\zeta_n\n\\\\\n\\zeta_n(0)=\\zeta_n(u_0)\n\\end{cases},\n\t\\quad n\\geq 1,\n\\]\nwhere for all $n\\geq 1$, the frequencies $\\omega_n^{(4)}(u_0)$ are given by \\eqref{eq:omega_n^4}\n\\begin{align*}\n\\omega_n^{(4)}(u_0)\n\t=n^3+n\\sum_{p\\geq 1}p|\\zeta_p(u_0)|^2-3\\sum_{p\\geq1}\\min(p,n)^2|\\zeta_p(u_0)|^2+3\\sum_{p,q\\geq 1}\\min(p,q,n)|\\zeta_p(u_0)|^2|\\zeta_q(u_0)|^2.\n\\end{align*}\nThis implies that\n\\[\n\\zeta_n(u(t))=\\zeta_n(u_0)\\e^{i\\omega_n^{(4)}(u_0)t},\n\t\\quad t\\in\\mathbb{R}, \\quad n\\geq1.\n\\]\nTherefore, for any finite gap initial data $u_0$, belonging to one of the sets $\\mathcal{U}_N$, the flow map $\\mathcal{S}^t:u_0\\in\\mathcal{U}_N\\mapsto u(t)\\in\\mathcal{U}_N$ is well-defined.\n\nIn part~\\ref{subsection:GWP}, we prove that for all $t\\in\\mathbb{R}$, this flow map extends by continuity to $H^s_{r,0}(\\mathbb{T})$ for $s\\geq 0$. 
We also show that the extension is sequentially weakly continuous in $H^s_{r,0}(\\mathbb{T})$ for $s> 0$, but not in $L^2_{r,0}(\\mathbb{T})$. In part \\ref{subsection:ill_posed}, we prove that the flow map does not extend by continuity to $H^{-s}_{r,0}(\\mathbb{T})$ for $s>0$. This gives a threshold for the global $\\mathcal{C}^0$-well-posedness of the third order Benjamin-Ono equation in the sense of Definitions 1 and 2 from \\cite{GerardKappelerTopalov2019}.\n\n\n\n\\subsection{Well-posedness in \\texorpdfstring{$H^s_{r,0}(\\mathbb{T})$, $s\\geq 0$}{H^s, s>=0}}\\label{subsection:GWP}\n\n\n\\begin{prop}\\label{prop:GWP}\nLet $s\\geq 0$. For any $u_0\\in H^s_{r,0}(\\mathbb{T})$, there exists a continuous map $t\\in\\mathbb{R}\\mapsto \\mathcal{S}^t(u_0)=u(t)\\in H^s_{r,0}(\\mathbb{T})$ with $u(0)=u_0$ such that the following holds.\n\nFor any finite gap sequence $(u_0^k)_k$ converging to $u_0$ in $H^s_{r,0}(\\mathbb{T})$, for any $t\\in\\mathbb{R}$, $u_k(t)=\\mathcal{S}^t(u_0^k)$ converges to $u(t)$ in $H^s_{r,0}(\\mathbb{T})$ as $k$ goes to infinity.\n\nMoreover, the extension of the flow map $\\mathcal{S}:u_0\\in H^s_{r,0}(\\mathbb{T})\\mapsto (t\\mapsto u(t))\\in \\mathcal{C}(\\mathbb{R},H^s_{r,0}(\\mathbb{T}))$ is continuous.\n\\end{prop}\n\nRecall that\nfrom \\cite{GerardKappelerTopalov2019}, as mentioned in the proofs of Proposition 2 and Theorem 8, we have the following result. For $s\\geq 0$, the Birkhoff map $\\Phi$ defines a homeomorphism between $H^s_{r,0}(\\mathbb{T})$ and the space\n\\[\nh^{\\frac{1}{2}+s}_+\n\t=\\Big\\{(\\zeta_n)_{n\\geq 1}\\mid \\sum_{n\\geq 1}n^{1+2s}|\\zeta_n|^2<+\\infty\\Big\\}.\n\\]\nThe proof of Proposition \\ref{prop:GWP} therefore relies on the following sequential convergence result obtained after applying the Birkhoff map.\n\n\n\\begin{lem}\\label{lem:convergence_zeta}\nFix $s\\geq0$. 
Let $\\zeta^k=(\\zeta^k_n)_{n\\geq 1}$, $k\\in\\mathbb{N}$, and $\\zeta$ be elements of $h^{\\frac{1}{2}+s}_+$ such that\n\\(\n\\|\\zeta^k-\\zeta\\|_{h^{\\frac{1}{2}+s}_+}\\longrightarroww{k\\to+\\infty}{}0.\n\\)\nThen for all $t\\in\\mathbb{R}$,\n\\[\n\\|(\\zeta^k_n\\e^{i\\omega_n^{(4)}(\\zeta^k)t})_n-(\\zeta_n\\e^{i\\omega_n^{(4)}(\\zeta)t})_n\\|_{h^{\\frac{1}{2}+s}_+}\\longrightarroww{k\\to+\\infty}{}0,\n\\]\nwhere the convergence is uniform on bounded time intervals.\n\\end{lem}\n\n\\begin{proof}\nNote that since $(\\zeta^k)_k$ converges to $\\zeta$ in $h^{\\frac{1}{2}+s}_+$, then for all $n\\in\\mathbb{N}$, formula \\eqref{eq:omega_n^4} for $\\omega_n^{(4)}(\\zeta^k)$ implies that $\\omega_n^{(4)}(\\zeta^k)$ converges to $\\omega_n^{(4)}(\\zeta)$ as $k$ goes to infinity.\n\nLet $\\varepsilon>0$. Fix $K\\in\\mathbb{N}$ such that for all $k\\geq K$,\n\\[\n\\|\\zeta^k-\\zeta\\|_{h^{\\frac{1}{2}+s}_+}\n\t\\leq \\varepsilon.\n\\]\nUsing that $\\zeta\\in h^{\\frac{1}{2}+s}_+$, fix $N\\in\\mathbb{N}$ such that\n\\[\n\\Big(\\sum_{n\\geq N}n^{1+2s}|\\zeta_n|^2\\Big)^{\\frac{1}{2}}\\leq\\varepsilon.\n\\]\n\nNow, if $k\\geq K$,\n\\begin{align*}\n\\|(\\zeta^k_n\\e^{i\\omega_n^{(4)}(\\zeta^k)t})_n-(\\zeta_n\\e^{i\\omega_n^{(4)}(\\zeta)t})_n\\|_{h^{\\frac{1}{2}+s}_+}\n\t&\\leq \\|(\\zeta^k_n)_n-(\\zeta_n)_n\\|_{h^{\\frac{1}{2}+s}_+}\n+\\|(\\zeta_n(\\e^{i\\omega_n^{(4)}(\\zeta^k)t}-\\e^{i\\omega_n^{(4)}(\\zeta)t}))_n\\|_{h^{\\frac{1}{2}+s}_+}\\\\\n\t&\\leq 3\\varepsilon+\\left(\\sum_{n=0}^{N-1}n^{1+2s}|\\zeta_n(\\e^{i\\omega_n^{(4)}(\\zeta^k)t}-\\e^{i\\omega_n^{(4)}(\\zeta)t})|^2\\right)^{\\frac{1}{2}},\n\\end{align*}\nwhich is less than $4\\varepsilon$ for $k$ large enough by convergence term by term of the elements in the sum. Moreover, this convergence is uniform on bounded time intervals.\n\\end{proof}\n\n\\begin{proof}[Proof of Proposition \\ref{prop:GWP}]\nLet $s\\geq 0$ and $u_0\\in H^s_{r,0}(\\mathbb{T})$. 
Fix $t\\in \\mathbb{R}$, and a sequence of finite gap initial data $(u_0^k)_k$ converging to $u_0$ in $H^s_{r,0}(\\mathbb{T})$. \n\nWe first establish that for all $t\\in\\mathbb{R}$, $(u_k(t))_k$ has a limit in $H^s_{r,0}(\\mathbb{T})$ as $k$ goes to $+\\infty$. By assumption, $\\Phi(u_0^k)$ converges to $\\Phi(u_0)$ in $h^{\\frac{1}{2}+s}_+$.\nDefine the sequence $\\zeta(t)$ by\n\\[\n\\zeta_n(t):=\\zeta_n(u_0)\\e^{i\\omega_n^{(4)}(u_0)t}, \\quad n\\in\\mathbb{N}.\n\\]\nLemma \\ref{lem:convergence_zeta} immediately implies that the sequence $(\\Phi(u_k(t)))_k$ converges to $\\zeta(t)$ in $h^{\\frac{1}{2}+s}_+$. Since $\\Phi^{-1}$ defines a continuous application from $h^{\\frac{1}{2}+s}_+$ to $H^s_{r,0}(\\mathbb{T})$, we deduce that $u_k(t)$ converges in $H^s_{r,0}(\\mathbb{T})$ to $u(t):=\\Phi^{-1}(\\zeta(t))$. Moreover, the convergence is uniform on bounded time intervals.\n\nWe now prove the continuity of the flow map $\\mathcal{S}^t$. Let $u_0^k\\in H^s_{r,0}(\\mathbb{T})$, $k\\in\\mathbb{N}$, be a sequence of initial data converging to some $u_0$ in $H^s_{r,0}(\\mathbb{T})$. Then $\\Phi(u_0^k)$ converges to $\\Phi(u_0)$ in $h^{\\frac{1}{2}+s}_+$, and the above Lemma \\ref{lem:convergence_zeta} again implies that $\\Phi(u_k(t))$ converges to $\\Phi(u(t))$ in $h^{\\frac{1}{2}+s}_+$. In other terms, $u_k(t)$ converges to $u(t)$ in $H^s_{r,0}(\\mathbb{T})$, where again this convergence is uniform on bounded intervals.\n\\end{proof}\n\n\\begin{cor}\nFor all $s>0$ and all $t\\in\\mathbb{R}$, the extension of the flow map restricted to $H^s_{r,0}(\\mathbb{T})$~: $u_0\\in H^s_{r,0}(\\mathbb{T})\\mapsto u(t)\\in H^s_{r,0}(\\mathbb{T})$ is sequentially weakly continuous.\n\\end{cor}\n\n\n\n\\begin{proof}\nLet $u_0^k\\in H^s_{r,0}(\\mathbb{T})$, $k\\in\\mathbb{N}$, be a sequence weakly converging in $H^s_{r,0}(\\mathbb{T})$ to $u_0\\in H^s_{r,0}(\\mathbb{T})$. 
Since the embedding $H^s_{r,0}(\\mathbb{T})\\hookrightarrow L^2_{r,0}(\\mathbb{T})$ is compact, $(u_0^k)_k$ is strongly convergent to $u_0$ in $L^2_{r,0}(\\mathbb{T})$. By continuity of the flow map $\\mathcal{S}^t$, one deduces that $(u_k(t))_k$ converges strongly to $u(t)$ in $L^2_{r,0}(\\mathbb{T})$. This implies that $(u_k(t))_k$ converges weakly to $u(t)$ in $H^s_{r,0}(\\mathbb{T})$.\n\\end{proof}\n\n\\begin{prop}\nFor all $t\\in\\mathbb{R}^*$, the extension of the flow map restricted to $L^2_{r,0}(\\mathbb{T})$: $u_0\\in L^2_{r,0}(\\mathbb{T})\\mapsto u(t)\\in L^2_{r,0}(\\mathbb{T})$ is not sequentially weakly continuous.\n\\end{prop}\n\n\\begin{proof}\nFix $t\\in\\mathbb{R}^*$ and $u_0\\in L^2_{r,0}(\\mathbb{T})\\setminus\\{0\\}$. We construct a sequence $(u_0^k)_k$ in $L^2_{r,0}(\\mathbb{T})$ weakly convergent to $u_0$ in $L^2_{r,0}(\\mathbb{T})$ but such that $u_k(t)=\\mathcal{S}^t(u_0^k)$ is not weakly convergent to $u(t)=\\mathcal{S}^t(u_0)$ in $L^2_{r,0}(\\mathbb{T})$.\n\nLet $\\alpha>0$ be a constant to be chosen later. For $k\\in\\mathbb{N}$, we choose $(\\zeta_p(u_0^k))_p$ converging weakly to $(\\zeta_p(u_0))_p$ in $h^{\\frac{1}{2}}_+$ (so that $u_0^k$ converges weakly to $u_0$ in $L^2_{r,0}(\\mathbb{T})$) and such that\n\\[\n|\\zeta_p(u_0^k)|^2=|\\zeta_p(u_0)|^2+\\frac{\\alpha}{p}\\delta_{k,p},\n\t\\quad p\\geq 1.\n\\]\nFor instance, for $p\\neq k$ we choose $\\zeta_p(u_0^k)=\\zeta_p(u_0)$, and for $p=k$, we choose $\\zeta_k(u_0^k)=\\sqrt{|\\zeta_k(u_0)|^2+\\frac{\\alpha}{k}}\\frac{\\zeta_k(u_0)}{|\\zeta_k(u_0)|}$ if $\\zeta_k(u_0)\\neq 0$ and $\\zeta_k(u_0^k)=\\sqrt{\\frac{\\alpha}{k}}$ if $\\zeta_k(u_0)= 0$.\n\nRecall that $t\\neq 0$. 
If $u_k(t)$ were weakly convergent to $u(t)$ in $L^2_{r,0}(\\mathbb{T})$, then $(\\zeta_p(u_k(t)))_p$ would converge weakly to $(\\zeta_p(u(t)))_p$ in $h^{\\frac{1}{2}}_+$, and therefore component by component:\n\\[\n\\zeta_p(u_0^k)\\e^{i\\omega^{(4)}_p(u_0^k)t}\n\t\\longrightarroww{k\\to+\\infty}{} \\zeta_p(u_0)\\e^{i\\omega^{(4)}_p(u_0)t},\n\t\\quad p\\geq 1.\n\\]\nIn particular, let $p\\geq1$ be such that $\\zeta_p(u_0)\\neq 0$. Then there exists a sequence $(n_k)_k$ of integers such that\n\\[\n\\omega^{(4)}_p(u_0^k)t+2\\pi n_k\n\t\\longrightarroww{k\\to+\\infty}{} \\omega^{(4)}_p(u_0)t.\n\\]\nFrom the expression \\eqref{eq:omega_n^4} of $\\omega^{(4)}_p(u_0^k)$ and the strong convergence of $(\\zeta_p(u_0^k))_p$ to $(\\zeta_p(u_0))_p$ in $\\ell^2_+=\\{(\\zeta_p)_{p\\geq 1}\\mid \\sum_{p\\geq1}|\\zeta_p|^2<+\\infty\\}$ by compactness, we get\n\\[\n\\sum_{p=1}^{+\\infty}p|\\zeta_p(u_0)|^2+\\alpha+\\frac{2\\pi n_k}{t}\n\t=\\sum_{p=1}^{+\\infty}p|\\zeta_p(u_0^k)|^2+\\frac{2\\pi n_k}{t}\n\t\\longrightarroww{k\\to+\\infty}{} \\sum_{p=1}^{+\\infty}p|\\zeta_p(u_0)|^2.\n\\] \nWe get a contradiction by choosing $\\alpha\\not\\in \\frac{2\\pi}{t}\\mathbb{Z}$.\n\\end{proof}\n\\subsection{Ill-posedness in \\texorpdfstring{$H^{-s}_{r,0}(\\mathbb{T})$, $s>0$}{H^{-s}, s>0}}\\label{subsection:ill_posed}\n\n\n\\begin{prop}\nFor all $t>0$, there is no continuous local extension of the flow map $\\mathcal{S}^t$ to $H^{-s}_{r,0}(\\mathbb{T})$ for $s>0$.\n\\end{prop}\n\n\\begin{proof}\nFix $t>0$. If there were a local extension of the flow map $\\mathcal{S}^t$ in the distribution sense, then $u_k(t)$ would be weakly convergent to $u(t)$ in $H^{-s}_{r,0}(\\mathbb{T})$. Applying the Birkhoff map, which is weakly sequentially continuous (see \\cite{GerardKappelerTopalov2019}, Theorem 6), $\\Phi(u_k(t))$ would converge weakly to $\\Phi(u(t))$ in $h^{\\frac{1}{2}-s}_+$. 
In particular, for all $n$,\n\\begin{equation}\\label{eq:limitv2}\n\\zeta_n(v_k(t))\\e^{i\\tau_k n t}\n\t=\\zeta_n(u_k(t))\n\t\\longrightarroww{k\\to+\\infty}{} \\zeta_n(u(t)).\n\\end{equation}\n\nWe deduce from \\eqref{eq:limitv1} and \\eqref{eq:limitv2} that if $\\zeta_n(u_0)\\neq 0$, then\n\\begin{equation}\\label{eq:limitv3}\n\\e^{i\\tau_k n t}\\longrightarroww{k\\to+\\infty}{} \\frac{\\zeta_n(u(t))}{\\zeta_n(u_0)}\\e^{-i\\widetilde{\\omega_n}(u_0)t}.\n\\end{equation}\nWe construct the sequence $(u_0^k)_k$ in order to contradict this latter point. Let $n\\in \\mathbb{N}$ such that $\\zeta_n(u_0)\\neq 0$. Fix $k\\in \\mathbb{N}$. From the fact that $u_0$ does not belong to $L^2_{r,0}(\\mathbb{T})$,\n\\[\n\\sum_{p> k}p|\\zeta_p(u_0)|^2=+\\infty,\n\\]\ntherefore one can choose $N_k\\geq k+1$ such that\n\\[\n\\sum_{p=k+1}^{N_k}p|\\zeta_p(u_0)|^2\\geq \\frac{2\\pi}{nt}.\n\\]\nLet $0<\\alpha_k<1$ such that there exists an integer $m_k$ such that\n\\[\n\\sum_{p\\leq k}p|\\zeta_p(u_0)|^2+\\alpha_k\\sum_{p=k+1}^{N_k}p|\\zeta_p(u_0)|^2\n\t=\\frac{1}{nt}(k\\pi+2\\pi m_k).\n\\]\n\nWe define $u_0^k$ by\n\\[\n\\zeta_p(u_0^k)\n\t=\n\\begin{cases}\n\\zeta_p(u_0) \\text{ if } p\\leq k\n\\\\\n\\sqrt{\\alpha_k}\\zeta_p(u_0) \\text{ if } k0$ and $\\gamma_q>0$. Then $u_0$ is a traveling wave for equation \\eqref{eq:bo4} if and only if\n\\[\n0< \\gamma_p< \\frac{1}{2}\\left(p+\\sqrt{p^2+4q\\frac{p+q}{3}}\\right)\n\\]\nand\n\\[\n\\gamma_q=\\frac{q\\frac{p+q}{3}+p\\gamma_p-\\gamma_p^2}{2\\gamma_p+q}.\n\\]\n\\end{prop}\n\n\\begin{proof}\nLet $u_0$ be such a two gap potential. 
Then $u_0$ is a traveling wave if and only if $\\frac{\\omega_p^{(4)}(u_0)}{p}=\\frac{\\omega_q^{(4)}(u_0)}{q}$, where\n\\[\n\\frac{\\omega_p^{(4)}(u_0)}{p}\n\t=p^2+(p\\gamma_p+q\\gamma_q)-3p(\\gamma_p+\\gamma_q)+3(\\gamma_p^2+2\\gamma_p\\gamma_q+\\gamma_q^2)\n\\]\nand\n\\[\n\\frac{\\omega_q^{(4)}(u_0)}{q}\n\t=q^2+(p\\gamma_p+q\\gamma_q)-3(\\frac{p^2}{q}\\gamma_p+q\\gamma_q)+3(\\frac{p}{q}\\gamma_p^2+2\\frac{p}{q}\\gamma_p\\gamma_q+\\gamma_q^2).\n\\]\nTaking the difference of the two terms, $\\frac{\\omega_p^{(4)}(u_0)}{p}=\\frac{\\omega_q^{(4)}(u_0)}{q}$, if and only if\n\\[\n0=q^2-p^2+3\\left(-p(\\frac{p}{q}-1)\\gamma_p-(q-p)\\gamma_q+(\\frac{p}{q}-1)\\gamma_p^2+2(\\frac{p}{q}-1)\\gamma_p\\gamma_q\\right).\n\\]\nDividing by $3(1-\\frac{p}{q})$, this necessary and sufficient condition becomes\n\\[\n0=q\\frac{p+q}{3}+p\\gamma_p-q\\gamma_q-\\gamma_p^2-2\\gamma_p\\gamma_q,\n\\]\ni.e.\n\\begin{equation}\\label{eq:twogap1}\n(2\\gamma_p+q)\\gamma_q\n\t=q\\frac{p+q}{3}+p\\gamma_p-\\gamma_p^2.\n\\end{equation}\nFix $\\gamma_p>0$ and $\\gamma_q$ satisfying this latter equality. 
We get that $\\gamma_q>0$ if and only if the left-hand side of the equality is positive, i.e.\n\\begin{equation}\\label{eq:twogap2}\n0< \\gamma_p< \\frac{1}{2}\\left(p+\\sqrt{p^2+4q\\frac{p+q}{3}}\\right).\n\\end{equation}\nConversely, any two gap solution $u_0$ satisfying \\eqref{eq:twogap1} and \\eqref{eq:twogap2} verifies $\\frac{\\omega_p^{(4)}(u_0)}{p}=\\frac{\\omega_q^{(4)}(u_0)}{q}$, therefore is a traveling wave solution.\n\\end{proof}\n\nLet us give an idea of the form of a two gap potential $u_0$ with gaps at indices $p0$, necessarily\n\\begin{equation}\\label{ineq:gammaq-gammar}\n\\gamma_q+\\gamma_r<\\frac{p+q}{3}.\n\\end{equation}\n\n\nHowever, the second equality \\eqref{eq:pr-pq} implies that\n\\begin{align*}\n\\gamma_r\n\t&=\\frac{(r-p)\\frac{r+q+p}{3}+(p+q)\\gamma_q-\\gamma_q^2}{r-p+2\\gamma_q}\\\\\n\t&=\\frac{(r-p)\\frac{q+p}{3}+(p+q)\\gamma_q-\\gamma_q^2}{r-p+2\\gamma_q}+\\frac{(r-p)\\frac{r}{3}}{r-p+2\\gamma_q}\\\\\n\t&=\\frac{p+q}{3}+\\frac{-2\\gamma_q\\frac{p+q}{3}+(p+q)\\gamma_q-\\gamma_q^2}{r-p+2\\gamma_q}+\\frac{(r-p)\\frac{r}{3}}{r-p+2\\gamma_q}\\\\\n\t&=\\frac{p+q}{3}+\\gamma_q\\frac{\\frac{p+q}{3}-\\gamma_q}{r-p+2\\gamma_q}+\\frac{(r-p)r}{3(r-p+2\\gamma_q)}.\\\\\n\\end{align*}\nSince from \\eqref{ineq:gammaq-gammar},\n\\[\n\\gamma_q<\\frac{p+q}{3},\n\\]\nwe get from this latter equality for $\\gamma_r$ that\n\\[\n\\gamma_r>\\frac{p+q}{3},\n\\]\nbut this is a contradiction with \\eqref{ineq:gammaq-gammar}.\n\\end{proof}\n\n\\begin{cor}\nThere are no $N$ gap traveling waves for $N\\geq 3$.\n\\end{cor}\n\n\\begin{proof}\nThe proof is the same as for the three gap traveling waves case, but with some additional terms which might hinder understanding for a first reading. We explain here how to adapt the proof.\n\nLet $u_0$ be a $N$ gap potential, $N\\geq 3$, and $p5,\\quad\n\\alpha(x)=\\alpha_0<1\\quad \\hbox{for} \\quad -50$. 
Similarly, the dominant\nexponential in the limit $x\\rightarrow -\\infty$ is \n$\\exp\\(\\sum_i \\Gamma_i\\)$, where the sum involves all $\\Gamma_i$s\nsuch that ${\\rm Re}\\(z_i+\\frac{1}{z_i}\\)<0$. Thus, the energy\n depends only on the modulus of ${\\rm Re}\\(z_i+\\frac{1}{z_i}\\)$. \nThe parameters $z_i$ can be\ncomplex for some solutions and so, writing them as $z_i\\equiv\ne^{-\\alpha_i+i\\theta_i}$, one gets\n\\begin{equation}\nz_i+\\frac{1}{z_i}=2\n\\left[\\cos\\theta_i\\,\\cosh\\alpha_i-i\\,\\sin\\theta_i\\,\\sinh\\alpha_i\\right]. \n\\end{equation}\nThus we conclude that the energy (\\ref{resu}) for the solutions\n(\\ref{nsolitonsolution}) is given by \n\\begin{equation}\nE = \\frac{8\\, m}{\\beta^2}\\; \\sum_{i=1}^N \n\\left[\\; \\mid\n \\cos\\theta_i\\mid\\,\\cosh\\alpha_i-i\\,\\sin\\theta_i\\,\\sinh\\alpha_i\\right], \n\\end{equation}\nwhere $N$ corresponds to the $N$-soliton sector in which the solution lies. The\nenergy will then be real for some special choices of the parameters\n$z_i$. But whenever this happens, the energy is automatically positive. \nOf course, the energy is real whenever the solution $\\varphi$ is\nreal. \n\nThe energies for the solutions we consider in this paper are then given by:\n\\begin{enumerate}\n\\item For the $1$-soliton constructed in section \\ref{sec:onesol} \n\\begin{equation}\nE_{1{\\rm -soliton}} =\\frac{8\\, m}{\\beta^2}\\frac{1}{\\sqrt{1-v^2}}.\n\\end{equation}\n\\item For the breather constructed in section \\ref{sec:breather} \n\\begin{equation}\nE_{{\\rm breather}} =\\frac{16\\,\n m}{\\beta^2}\\frac{\\sqrt{1-\\omega^2}}{\\sqrt{1-v^2}}. \n\\end{equation}\n\\item For the wobble constructed in sections \\ref{sec:wobblemain} and\n \\ref{sec:wobbleapp} \n\\begin{equation}\nE_{{\\rm wobble}} =\\frac{8\\, m}{\\beta^2}\\frac{1}{\\sqrt{1-v_K^2}}+\n\\frac{16\\,\n m}{\\beta^2}\\frac{\\sqrt{1-\\omega^2}}{\\sqrt{1-v_B^2}}. 
\n\\end{equation}\n\\item For the solution of the kink with two breathers constructed in sections\n \\ref{sec:kink+twobreathers} and \\ref{sec:kinktwobreathers} (with their\n velocities set to zero)\n\\begin{equation}\nE_{{\\rm kink+2 breathers}} =\\frac{8\\, m}{\\beta^2}+\n\\frac{16\\,\n m}{\\beta^2}\\sqrt{1-\\omega_1^2} +\n\\frac{16\\,\n m}{\\beta^2}\\sqrt{1-\\omega_2^2}, \n\\end{equation}\nwhere $\\omega_i=\\sin \\theta_i$, $i=1,2$. \n\\end{enumerate}\n\n\n\n\n\n\n\n\n\n\n\n\\section{Final Remarks}\nIn this paper we have drawn the attention of the readers to the rich structure of solutions (of finite energy)\nof the Sine-Gordon model. Even in the one kink sector there are solutions involving, in addition to the kink,\nalso many breathers. As the energy of each breather depends on its frequency (and vanishes in the limit\nof this frequency going to 1) the extra energy, due to these extra breathers, does not have to be very large.\nThe solutions appear to be stable and this stability is guaranteed by the integrability of the model.\nWe have tested this numerically and have found that small perturbations, due to the discretisations, do not\nalter this stability. To change it we need something more drastic - like the absorption or the space variation \nof the potential ({\\it ie} the coefficient of the $sin$ term in the Lagrangian). But even then \nthe effects are not very large - one sees splitting of breathers {\\it etc} but no `global annihilation'.\n\nAt the same time we have looked at the total energy and the energy density of a general solution.\nWe point out that the total energy is determined by the asymptotic values of the fields\n({\\it ie} is given by (\\ref{resu})). This result has been known earlier (for kinks and antikinks)\nto people working in integrable and conformal field theories but may\nnot be known generally. 
We have given a general proof of this in a\nseparate, rather technical, paper \\cite{wojtek-luiz} \nbut we have also checked it explicitly for all field configurations\ninvolving up to 5 kinks (or antikinks). \n\nOur results do not answer, definitively, the question as to whether a\nkink possesses an internal mode \nof oscillation or not. The perturbation of the kink we performed in\nsection \\ref{sec:perturbedkink} did not produce any oscillatory\ninternal mode. On the contrary, all the energy given to the kink by the\nperturbation was used to produce breather-like excitations which died\naway very slowly. In fact, the extremely slow decay of these\nexcitations indicates the difficulty of settling the issue of the\nexistence of the internal mode. If simulations are not done very carefully\nand run for very long times, then \nthis fact can lead to incorrect interpretations. \n\nAs we have pointed out, one can get oscillatory kink configurations by\nconstructing exact solutions corresponding to the stationary\nsuperposition of a kink \nand one or many breathers. Such a case of a kink and a breather was named\n`a wobble' by K\\\"alberman \\cite{Kalberman}. In the case of the wobble the\nfrequency of oscillation cannot be greater than $1$, and the energy of\nthe oscillation goes to zero as the frequency approaches $1$. Boesch\nand Willis \\cite{Boesch} claimed to have seen an oscillatory mode of\nthe kink just above the phonon band, {\\it i.e.} just above $1$. If that is\nso, the wobble does not correspond to that mode. It is true, however,\nthat the frequency of the wobble can go above $1$ by a Lorentz\nboost. One has then to settle the issue of how precise the simulations\nof Boesch and Willis were to separate this effect. If one considers\nthe exact stationary superposition of a kink and two breathers one\ncan get frequencies of oscillations greater than $1$, as shown in\n(\\ref{twobreathers}). 
However, this is in a context different from\nthat discussed in the literature where the considered frequencies are\nstudied in \nthe linear approximation. In addition, although our simulations have\nshown that the kink plus breathers are stable solutions against the\ndiscretisation, they can be pulled apart by scattering through a hole, as\nmentioned at the end of section \\ref{sec:wobblemain}. The fact that a kink\nsolution of the equations of motion \ncan involve many breathers makes the problem of the zero mode\ndifficult to resolve \nanalytically (and in the literature \nit is discussed only in the `linear approximation') and almost\nimpossible to resolve numerically. \nThe rich spectrum of the solutions and the appearance of many\nbreathers makes this task particularly \nhard to perform. It would be interesting to see whether these extra\nbreathers play a significant role \nin any physical applications of the model.\n\n\n\n\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\nThe famous MacWilliams Extension Theorem states that each linear isometry of a linear code extends to a monomial map. The result was originally proved in the MacWilliams' Ph.D. thesis, see \\cite{macwilliams-phd61}. Later, the result was generalized for codes over modules. A summary is given below.\n\nLet $R$ be a ring with identity and let $A$ be a finite left $R$-module. Consider a module $A^n$ with the Hamming metrics and a code $C \\subseteq A^n$ that is a left $R$-submodule. For two $R$-modules $A$ and $B$ let $\\Hom_R(A,B)$ denote the set of $R$-module homomorphism from $A$ to $B$.\nCall a map $f \\in \\Hom_R(A^n,A^n)$ \\emph{monomial}, if there exist a permutation $\\pi \\in S_n$ and automorphisms $g_1,\\dots, g_n \\in \\Aut_R({A})$, such that, for any $a \\in {A}^n$,\n\\begin{equation*}\nf\\big((a_1,\\dots,a_n)\\big) = (g_1(a_{\\pi(1)}), \\dots, g_n(a_{\\pi(n)}))\\;.\n\\end{equation*}\n\nNote that a monomial map preserves the Hamming distance. 
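The weight-preservation property of monomial maps stated above can be checked exhaustively on a small example. The following sketch is not from the paper: it takes the $\mathbb{Z}$-module $A = \mathbb{Z}_4$ with $n = 3$, and the permutation `perm` and units `units` are arbitrary illustrative choices (the units of $\mathbb{Z}_4$ are $1$ and $3$, giving its two automorphisms $x \mapsto ux$).

```python
from itertools import product

# Toy alphabet: A = Z_4 as a Z-module; Aut(Z_4) = {x -> u*x : u in {1, 3}}.
n = 3
perm = [2, 0, 1]        # an arbitrary permutation pi of the coordinates {0, ..., n-1}
units = [3, 1, 3]       # one automorphism g_i(x) = units[i] * x mod 4 per coordinate

def monomial(a):
    # f(a)_i = g_i(a_{pi(i)})
    return tuple((units[i] * a[perm[i]]) % 4 for i in range(n))

def wt(a):
    # Hamming weight: number of nonzero coordinates
    return sum(1 for x in a if x != 0)

# Exhaustive verification over all of A^n
ok = all(wt(monomial(a)) == wt(a) for a in product(range(4), repeat=n))
```

The check succeeds because each $g_i$ is a bijection fixing $0$, so a coordinate of $f(a)$ vanishes exactly when the corresponding permuted coordinate of $a$ does.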
It can be easily shown, that each isometry $f \\in \\Hom_R(A^n,A^n)$ is monomial.\nWe say that an alphabet $A$ has the \\emph{extension property} if for any positive integer $n$, for any code $C \\subseteq A^n$ each Hamming isometry $f \\in \\Hom_R(C, A^n)$ extends to a monomial map.\n\n\\begin{figure}[!ht]\n\t\\centering\n\t\\begin{tikzcd}\n\t\tA^n \\arrow{rd}{h}&\\\\\n\t\tC \\arrow[hook]{u}{\\iota} \\arrow[swap]{r}{f}& A^n \t\n\t\\end{tikzcd}\n\t\\caption{Extension property.}\n\t\\label{fig-extendibility}\n\\end{figure}\n\nClassical linear codes correspond to the case when $R = \\mathbb{F}_q$ and $A = R$, where $\\mathbb{F}_q$ is a finite field. As we mentioned before, the MacWilliams Extension Theorem states that the alphabet $A = R = \\mathbb{F}_q$ has the extension property.\n\nApparently, not every module alphabet satisfies the extension property.\nRecall the definition of a pseudo-injective module. A left $R$-module $A$ is called \\emph{pseudo-injective}, if for each left $R$-submodule $B \\subseteq A$ and for each two embeddings $\\phi,\\psi \\in \\Hom_R({B},{A})$ there exists an automorphism $h \\in \\Aut_R({A})$ such that $\\psi = h\\phi$.\n\\begin{figure}[!ht]\n\t\\centering\n\t\\begin{tikzcd}\n\t\tB \\arrow[hook]{r}{\\psi} \\arrow[hook, swap]{dr}{\\phi} & A\\\\\n\t\t& A \\arrow[swap]{u}{h} \t\n\t\\end{tikzcd}\n\t\\caption{Pseudo-injectivity of a module $A$.}\n\t\\label{fig-x-prop}\n\\end{figure}\nIn other words, $A$ is pseudo-injective if and only if any $R$-linear isomorphism between its submodules extends to an $R$-linear automorphism of $A$. Not all the $R$-modules are pseudo-injective.\n\\begin{example}\n\tConsider a $\\Z$-module $A = \\Z_2 \\oplus \\Z_4$. Let $M = \\langle (0,2)\\rangle$ and $N = \\langle (1,0)\\rangle$. Obviously, $M \\cong N$, with isomorphism $\\psi: (0,2) \\rightarrow (1,0)$, but there is no isomorphism $\\phi: A \\rightarrow A$ such that $\\phi = \\psi$ on $M$. 
So, $A$ is not pseudo-injective.\n\\end{example}\n\nRecall the \\emph{socle} of $A$ is a submodule $\\soc(A) \\subseteq A$ that is equal to the sum of all simple submodules of $A$. A module is called \\emph{simple} (or irreducible) if it does not contain any other submodules except zero and itself.\nIn \\cite{wood-foundations} the author proved a general extension theorem for a pseudo-injective module alphabet with a cyclic socle and showed that these conditions are maximal.\n\\begin{theorem}\\label{thm-wood-general-modules}\n\tIf $A$ is an alphabet that is pseudo-injective and $\\soc(A)$ cyclic, then $A$ has the extension property.\n\\end{theorem}\n\nIn the case, when $A = R$, in \\cite{dinh-lopez-1, greferath, wood-foundations} the authors proved the extension theorem for Frobenius rings and showed the maximality of the condition.\n\\begin{theorem}\\label{thm-frobenius-wood}\n\tLet $R$ be a Frobenius ring. Then the alphabet $A = R$ has the extension property.\n\\end{theorem}\n\nIn \\cite{dinh-lopez, dinh-lopez-1} the extension problem for arbitrary ring and alphabet was partially translated to the case of matrix rings and matrix modules.\nThere the authors proved the existence of a general counterexample for codes over a matrix module alphabet. 
An explicit construction appeared in \\cite{wood-foundations}.\n\\begin{theorem}[see \\cite{wood-foundations}]\\label{thm-wood-matrix-module}\n\tLet $R = \\M_{m}(\\mathbb{F}_q)$ be the ring of all $m \\times m$ matrices over a finite field $\\mathbb{F}_q$ and let $A = \\M_{m \\times k}(\\mathbb{F}_q)$ be the left $R$-module of all $m \\times k$ matrices over $\\mathbb{F}_q$.\n\t\n\tIf $k \\leq m$, then the alphabet $A$ has the extension property.\n\t\nIf $k > m$, there exist a linear code $C \\subset A^K$, $K = \\prod_{i=1}^{k-1} (1 + q^i)$, and a map $f \\in \\Hom_R(C,A^K)$ that is a Hamming isometry, but there is no monomial map extending $f$.\n\\end{theorem}\nNote that the theorem does not say whether the given counterexample has the minimum possible code length.\nIn this paper we improve \\Cref{thm-wood-matrix-module}. More precisely, in the context of matrix modules, we find the minimum code length for which an example of a code with an unextendable isometry exists. It appears that such a code of minimal length is similar to the code from \\Cref{thm-wood-matrix-module}, described in \\cite{wood-foundations}. In our previous work \\cite{d1} we found the precise bound for the case $m = 1$, which corresponds to linear codes over a vector space alphabet.\n\nOur main idea is to use a geometric approach. In \\Cref{thm-isometry-criterium} we characterize unextendable isometries in terms of nontrivial solutions of the isometry equation (\\ref{eq-main-space-equation}), which is an equation of indicator functions of modules. In \\cite{d1} we observed basic properties of this equation for the case of vector spaces, and here, in \\Cref{thm-matrix-module-bound}, we describe some properties of the equation for matrix modules.\n\nIn \\Cref{thm-mds-extension-theorem} we prove that the extension property holds for MDS codes over a module alphabet when the dimension of the code does not equal 2. 
Despite the general result of \\Cref{thm-wood-general-modules}, for MDS codes the extension theorem holds for arbitrary finite $R$-module alphabet.\n\n\\section{Extension criterium}\nLet $W$ be a left $R$-module isomorphic to $C$.\nLet $\\lambda \\in \\Hom_R(W, {A^n})$ be a map such that $\\lambda(W) = C$.\nPresent the map $\\lambda$ in the form $\\lambda = (\\lambda_1,\\dots, \\lambda_n)$, where $\\lambda_i \\in \\Hom_R(W,{A})$ is a projection on the $i$th coordinate, for $i \\in \\myset{n}$. Consider the following modules, for $i \\in \\myset{n}$,\n\\begin{equation*}\n\tV_i = \\Ker \\lambda_i \\subseteq W\\;.\n\\end{equation*}\n\nLet $f: C \\rightarrow A^n$ be a homomorphism of left $R$-modules.\nDefine $\\mu = f\\lambda \\in \\Hom_R(W,A^n)$ and denote\n\\begin{equation*}\nU_i = \\Ker \\mu_i \\subseteq W\\;.\n\\end{equation*}\n\n\\begin{figure}[!ht]\n\t\\centering\n\t\\begin{tikzcd}\n\t\tW \\arrow{r}{\\lambda} \\arrow[swap]{rd}{\\mu} & C \\arrow{d}{f}\\\\\n\t\t& A^n \t\n\t\\end{tikzcd}\n\t\\caption{The maps $\\lambda$ and $\\mu$.}\n\t\\label{fig-parametriztion}\n\\end{figure}\n\nDenote the tuples of modules $\\V = (V_1, \\dots, V_n)$ and $\\U = (U_1, \\dots, U_n)$.\nWe say that $\\V = \\U$ if they represent the same multiset of modules. In other words, $\\V = \\U$ if and only if there exists $\\pi \\in S_n$ such that for each $i \\in \\myset{n}$, $U_i = V_{\\pi(i)}$. Recall the indicator function of a subset $Y$ of a set $X$ is a map $\\id_Y: X \\rightarrow \\{0,1\\}$, such that $\\id_Y(x) = 1$ if $x \\in Y$ and $\\id_Y(x) = 0$ otherwise.\n\n\\begin{proposition}\\label{thm-isometry-criterium}\n\tThe map $f \\in \\Hom_R (C,A^n)$ is a Hamming isometry if and only if the following equality holds,\n\t\\begin{equation}\\label{eq-main-space-equation}\n\t\\sum_{i=1}^n \\id_{V_i} = \\sum_{i=1}^n \\id_{U_i}\\;.\n\t\\end{equation}\t\n\tIf $f$ extends to a monomial map, then $\\V = \\U$.\n\tIf $A$ is pseudo-injective and $\\V = \\U$, then $f$ extends to a monomial map. 
\n\\end{proposition}\n\\begin{proof}\n\tProve the first part. By definition, the map $f$ is a Hamming isometry if for each $a \\in C$, $\\wt(f(a)) = \\wt(a)$, or, equivalently, $f$ is an isometry if and only if for each $w \\in W$, $\\wt(\\lambda(w)) = \\wt(\\mu(w))$. Note that\n\tfor any $w \\in W$, $n - \\wt(\\lambda(w)) = \\sum_{i=1}^{n} (1 - \\wt(\\lambda_i(w))) = \\sum_{i=1}^n \\id_{\\Ker \\lambda_i} (w)$ and the same for the map $\\mu$. Hence \\cref{eq-main-space-equation} holds.\n\t\n\tProve the second part. Consider any two maps $\\sigma, \\tau \\in \\Hom_R(W,A)$. \n\tIf there exists $g \\in \\Aut_R({A})$ such that $\\sigma = g\\tau$ then $\\Ker \\sigma = \\Ker \\tau$.\n\t\n\tLet $\\Ker \\sigma = \\Ker \\tau = N \\subseteq W$ and let $A$ be pseudo-injective. The corresponding homomorphisms defined on the quotient module $\\bar{\\sigma}, \\bar{\\tau}: W\/N \\rightarrow A$ are injective. Using the property of $A$, there exists $h\\in \\Aut_R(A)$ such that $\\bar{\\sigma} = h \\bar{\\tau}$. It is easy to check that $\\sigma = h \\tau$.\n\\end{proof}\n\nLet a pair of tuples of modules $(\\U,\\V)$ be a solution of \\cref{eq-main-space-equation}. If $\\U= \\V$ then we call the solution \\emph{trivial}. \\Cref{thm-isometry-criterium} gives a relation between trivial solutions of \\cref{eq-main-space-equation} and extendable isometries.\n\n\\begin{remark}As it was noted in \\cite{dinh-lopez-1} and \\cite{wood-foundations}, the property of pseudo-injectivity is necessary in the statement. 
Assuming that the alphabet is not pseudo-injective means that the extension property fails even if the length of a code is 1.\n\\end{remark}\n\n\\section{General extension theorem}\nIn this section we show how to use the approach from the previous section to prove \\Cref{thm-wood-general-modules}.\nRecall that a left $R$-module $M$ is called \\emph{cyclic} if there exists a generator element $x \\in M$, such that $M = Rx = \\{rx \\mid r \\in R \\}$.\n\\begin{lemma}\\label{lemma-cyclic-if-covering}\n\tAn $R$-module $M$ is not cyclic if and only if there exist submodules $\\{0\\} \\subset E_1, \\dots, E_r \\subset M$ such that $M = \\bigcup_{i=1}^r E_i$.\n\\end{lemma}\n\\begin{proof}\n\tAssume that there exists such a covering $M = \\bigcup_{i=1}^r E_i$ of $M$ by submodules and let $M$ be cyclic. For a generator $x \\in M$ there exists $i \\in \\myset{r}$ such that $x \\in E_i$ and thus $M = Rx \\subseteq E_i \\subset M$ that leads to a contradiction.\n\t\n\tIf $M$ is not cyclic, then for any $x \\in M\\setminus\\{0\\}$, $\\{0\\} \\subset Rx \\subset M$ and therefore $M = \\bigcup_{x \\in M\\setminus\\{0\\}} Rx$.\n\\end{proof}\n\n\\begin{lemma}\\label{lemma-nontrivial-iff-noncyclic}\n\tFor each non-cyclic module $M$ there exists a nontrivial solution of \\cref{eq-main-space-equation} with at least one module equals $M$.\n\tA solution of the equation\n\\begin{equation*}\n\\sum_{i = 1}^s a_i \\id_{V_i} = \\sum_{i = 1}^t b_i \\id_{U_i}\\;,\n\\end{equation*} \n\twith only cyclic modules is trivial, where all the coefficients are in $\\mathbb{C}$.\n\\end{lemma}\n\\begin{proof}\n\tProve the first part.\n\tLet $M = \\bigcup_{i=1}^r E_i$ be a nontrivial covering of $M$ by submodules. Denote $M_I = \\bigcap_{i \\in I} E_i$, where $I \\subseteq \\myset{r}$ and define $M_{\\emptyset} = M$. 
Use the inclusion-exclusion formula,\n\t\\begin{equation*}\n\t\\sum_{\\card{I} \\text{ is even}} \\id_{M_I} = \\sum_{\\card{I} \\text{ is odd}} \\id_{M_I}\\;,\n\t\\end{equation*}\n\twhere the summation is over all subsets $I \\subseteq \\myset{r}$. \n\tIt is easy to see that the resulting equation is nontrivial; for example, the module $M$ appears only on the left side. The number of terms on each side is the same and equals $2^{r - 1}$.\n\t\nProve the second part. Assume that there exists a nontrivial solution of the equation.\nWithout loss of generality, we can assume that the equation is simplified by eliminating equal terms and reindexing. Hence, $a_i,b_j \\neq 0$ for $i \\in \\myset{s}$, $j \\in \\myset{t}$ and all $V_i, U_i$ are different.\nSince the solution is nontrivial, $s,t>0$. Among the modules, choose one maximal with respect to inclusion; suppose it is $V_1$. Then $V_1 = \\bigcup_{i=1}^t (V_1 \\cap U_i)$, where $\\{0\\} \\subset V_1 \\cap U_i \\subset V_1$, $i \\in \\myset{t}$. From \\Cref{lemma-cyclic-if-covering}, the module $V_1$ is therefore non-cyclic, which contradicts our assumption.\n\\end{proof}\n\n\\paragraph{Characters and Fourier transform.} \nDenote by $\\hat{A} = \\Hom_\\mathbb{Z}(A, \\mathbb{C}^*)$ the set of characters of $A$. The set $\\hat{A}$ has a natural structure of a right $R$-module.\nLet $A, W$ be two left $R$-modules. For a map $\\sigma \\in \\Hom_R({W},{A})$ define a map $\\hat{\\sigma}: \\hat{A} \\rightarrow \\hat{W}$, $\\chi \\mapsto \\chi \\sigma$. Note that $\\hat{\\sigma} \\in \\Hom_R(\\hat{A}_R,\\hat{W}_R)$. It is known that $\\wedge$ is an exact contravariant functor on the category of left (right) $R$-modules, see \\cite{wood-foundations}.\n\n\nLet $M$ be a left $R$-module. 
The Fourier transform of a map $f: M \\rightarrow \\mathbb{C}$ is a map $\\mathcal{F}(f): \\hat{M} \\rightarrow \\mathbb{C}$, defined as\n\\begin{equation*}\n\\mathcal{F}(f)(\\chi) = \\sum_{m \\in M} f(m)\\chi(m)\\;.\n\\end{equation*}\nIt can be easily proved that for a submodule $V \\subseteq M$, $\\mathcal{F}(\\id_V) = \\card{V} \\id_{V^\\perp}$, where an orthogonal module is defined as $V^\\perp = \\{ \\chi \\in \\hat{M} \\mid \\forall v \\in V, \\chi(v) = 1 \\} \\subseteq \\hat{M}$.\nNote that the Fourier transform is invertible, $V^{\\perp\\perp} \\cong V$ and for any $V, U \\subseteq W$, $(V \\cap U)^\\perp = V^\\perp + U^\\perp$. \n\nFor any $\\sigma \\in \\Hom_R(W,A)$, $\\Ker \\sigma = \\{w \\in W \\mid \\sigma(w) = 0 \\} = \\{ w \\in W \\mid \\forall \\chi \\in \\hat{A}, \\chi(\\sigma(w)) = 1 \\} = (\\Img \\hat{\\sigma})^\\perp$, and thus $(\\Ker \\sigma)^\\perp = \\Img \\hat{\\sigma}$.\n\\begin{theorem}\\label{thm-cyclic-extendable}\n\tIf $\\hat{A}$ is a cyclic right $R$-module and $A$ is pseudo-injective, then $A$ has the extension property.\n\\end{theorem}\n\\begin{proof}\n\tLet $C \\subset A^n$ be a code and let $f \\in \\Hom_R(C,A^n)$ be an isometry. 
By \\Cref{thm-isometry-criterium}, $f$ is extendable if and only if the solution $(\\U,\\V)$ of \\cref{eq-main-space-equation} is trivial.\n\tDue to the properties of the Fourier transform, \\cref{eq-main-space-equation} is equivalent to the following equality of functions defined on $\\hat{W}$,\n\t\t\\begin{equation}\\label{eq-dual}\n\t\t\\sum_{i = 1}^n \\card{V_i} \\id_{V_i^\\perp} = \\sum_{i = 1}^n \\card{U_i} \\id_{U_i^\\perp}\\;,\n\t\t\\end{equation}\n\tand the solution of \\cref{eq-main-space-equation} is trivial if and only if the corresponding orthogonal solution is trivial.\n\tThe statement of the theorem is a direct consequence of \\Cref{lemma-nontrivial-iff-noncyclic} and the fact that the modules $V_i^\\perp = \\Img \\hat{\\lambda_i}$, $U_i^\\perp = \\Img \\hat{\\mu_i}$, $i \\in \\myset{n}$, are all cyclic, since so is $\\hat{A}$.\n\\end{proof}\n\n\\begin{remark}\n\t\\Cref{thm-cyclic-extendable} is an analogue of \\Cref{thm-wood-general-modules}, where instead of the cyclic socle condition we use the cyclic character module condition. Let us prove that these two conditions are equivalent. In \\cite{wood-foundations} it was proven that $\\soc(A)$ is cyclic if and only if $A$ can be embedded into ${}_R \\hat{R}$. This means there exists an injective homomorphism of left $R$-modules $\\phi : A \\rightarrow {}_R \\hat{R}$. Since $\\wedge$ is an exact functor, the latter is equivalent to the fact that the map $\\hat{\\phi}: \\hat{\\hat{R}}_R \\cong R_R \\rightarrow \\hat{A}_R$ is a surjective homomorphism of right $R$-modules, which is a characterization of cyclicity of $\\hat{A}_R$. 
\\comment{Note that $\\Ker \\phi = A^\\perp$ and hence $R_R \/ A^\\perp \\cong \\hat{A}_R$.}\n\\end{remark}\n\n\n\n\\section{Extension theorem for matrix alphabets}\\label{sec-main}\nAn $R$-module $A$ is called \\emph{semisimple} (or completely reducible) if $A$ is a direct sum of simple submodules.\n\\begin{lemma}\\label{lemma-semisimple-pseudoinjective}\n\tIf a left $R$-module $A$ is semisimple, then $A$ is pseudo-injective. \n\\end{lemma}\n\\begin{proof}\n\tLet $N, M \\subseteq A$ be two submodules and let $\\psi: N \\rightarrow M$ be an isomorphism. Since $A$ is semisimple, there exist $N', M' \\subseteq A$ such that $A = N \\oplus N' = M \\oplus M'$. Since $N \\cong M$, there is an isomorphism $\\phi: N' \\rightarrow M'$. Then $\\psi$ extends to the automorphism $\\psi\\times \\phi: A = N \\oplus N' \\rightarrow M \\oplus M' = A$.\n\\end{proof}\n\nIt is proved in \\cite[p. 656]{lang} that each module $M$ over the ring $R = \\M_m(\\mathbb{F}_q)$ is semisimple and is isomorphic to $\\M_{m \\times k}(\\mathbb{F}_q)$ for some $k$. 
Call $k$ the dimension of $M$ and denote $\\dim M = k$.\nWe need the following lemmas to prove an extension theorem for $R$-linear codes over $M$.\n\n\n\n\\begin{lemma}\\label{lemma-binomial-sums}\n\tThe following equalities hold,\n\t\\begin{equation*}\n\t\\sum_{i = 0}^{t-1} (-1)^i q^{\\binom{i}{2}} \\binom{t}{i}_q = (-1)^{t-1} q^{\\binom{t}{2}}\\;,\n\t\\end{equation*}\n\t\\begin{equation*}\n\t\\sum_{i = 0}^t q^{\\binom{i}{2}} \\binom{t}{i}_q = \\prod_{i=0}^{t-1} (1 + q^i)\\;.\n\t\\end{equation*}\n\\end{lemma}\n\\begin{proof}\n\tUse a well-known Cauchy binomial theorem,\n\t\\begin{equation*}\\prod_{i=0}^{t-1} (1 + x q^i) = \\sum_{i = 0}^t q^{\\binom{i}{2}} \\binom{t}{i}_q x^i\\;.\n\t\\end{equation*}\n\tTo get the equalities in the statement, put $x = -1$ and $x = 1$.\n\\end{proof}\n\n\n\n\\begin{lemma}\\label{lemma-number-of-submodules}\n\tLet $R$ be a matrix module over $\\mathbb{F}_q$, let $M$ be a $t$-dimensional $R$-module and let $X$ be a $p$-dimensional submodule of $M$. For each $i \\in \\{p,\\dots, t\\}$ we have,\n\t\\begin{equation*}\n\t\\card{ \\{ V \\subseteq M \\mid X \\subseteq V, \\dim V = i \\} } = \\binom{t - p}{i - p}_q\n\t\\end{equation*}\n\\end{lemma}\n\\begin{proof}\n\tIt is a well known fact that for any $k \\in \\{0, \\dots, t\\}$ there are $\\binom{t}{k}_q$ submodules of $M$ of dimension $i$. The number of such submodules, that contain $X$ is equal to the number of submodules of dimension $i - \\dim X$ in $M\/X$ that contain $\\{0\\}$, which is always the case. In other words, $\\card{ \\{ V \\subseteq M \\mid X \\subseteq V, \\dim V = i \\} } = \\card{ \\{ V \\subseteq M\/X \\mid \\dim V = i - p \\} } = \\binom{\\dim M\/X}{i - p}_q = \\binom{t - p}{i - p}_q$.\n\\end{proof}\n\nThe next proposition is an improvement of \\Cref{thm-wood-matrix-module}. 
Note that the code length $K$ in \\Cref{thm-wood-matrix-module} depends on $k$ (the alphabet parameter) whereas in the following proposition the code length $N$ depends on $m$ (the ring parameter) and $N$ is not greater than $K$. In our proof this improvement is easily obtained from the author's construction in \\cite{wood-foundations}; however, it does not appear in the original statement. \n\\begin{proposition}\\label{thm-matrix-module-bound}\n\tLet $R = \\M_{m}(\\mathbb{F}_q)$ and let $A = \\M_{m \\times k}(\\mathbb{F}_q)$ be a left $R$-module.\n\t\n\tIf $k \\leq m$, then the alphabet $A$ has the extension property.\n\t\n\tIf $k > m$, there exist a linear code $C \\subset A^N$, $N = \\prod_{i=1}^{m} (1 + q^i)$, and a map $f \\in \\Hom_R(C,A^N)$ that is a Hamming isometry, but there is no monomial transformation extending $f$.\n\t\n\tFor any $n < N$ and any code $C \\subseteq A^n$, each Hamming isometry $f \\in \\Hom_R(C,A^n)$ is extendable.\n\\end{proposition}\n\\begin{proof}\nFor any $k$, since $A$ is semisimple, by \\Cref{lemma-semisimple-pseudoinjective}, $A$ is pseudo-injective.\nIf $k\\leq m$, the right $R$-module $\\hat{A}$ is cyclic, since $\\dim \\hat{A} = \\dim A = k \\leq m$. From \\Cref{thm-cyclic-extendable}, $A$ has the extension property.\n\nTo construct a code of length $N$ we do the following. Let $C'$ be the code over the alphabet $B = \\M_{m \\times (m+1)}(\\mathbb{F}_q)$ and let $f \\in \\Hom_R(C',B^N)$ be the unextendable isometry from \\Cref{thm-wood-matrix-module}. Choosing this alphabet, we have $K = N$. \nSince $k > m$, in $A$ there exists a submodule isomorphic to $B$, so $C'$ can be considered as a code in $A^N$ and $f \\in \\Hom_R(C', A^N)$. Due to the construction of the author in \\cite{wood-foundations}, the code $C'$ has an all-zero column and $f(C')$ does not. Therefore $f$ is unextendable.\n\nLet $k > m$.\nLet $n'$ be the minimum value of the code length for which there exists an unextendable isometry $f \\in \\Hom_R(C,A^{n'})$. 
By \\Cref{thm-isometry-criterium}, there exists a non-trivial solution of \\cref{eq-main-space-equation}.\nHence, the minimum length $n$ of a nontrivial solution of \\cref{eq-main-space-equation} is not greater than $n'$.\nConsider a solution of length $n$ which has the minimum value of $\\max\\{ \\dim V_i, \\dim U_i \\mid i \\in \\myset{n} \\}$, and denote this value by $r$.\nFrom \\Cref{lemma-nontrivial-iff-noncyclic}, $r >m$.\nWithout loss of generality, let $\\dim {V_1} = r$.\n\nIntroduce some notation. Denote $I_j = \\{ i \\mid \\dim V_i < r - j \\}$, $J_j = \\{ i \\mid \\dim U_i < r - j \\}$ and\n\\begin{equation*}\n\\Sigma_j= \\sum_{\\dim V = r - j} \\id_V\\;,\n\\end{equation*}\nfor $j \\in \\{ 0, \\dots, r \\}$, where the summation is over all the submodules of $V_1$ of the given dimension.\nCalculate the restriction of \\cref{eq-main-space-equation} to the module $V_1$,\n\\begin{equation*}\na\\Sigma_0 = \\sum_{i \\in J_0} \\id_{U_i \\cap V_1} - \\sum_{i \\in I_0} \\id_{V_i \\cap V_1}\\;,\n\\end{equation*}\nwhere by $a \\geq 1$ we denote the number of occurrences of $V_1$ on the left-hand side of \\cref{eq-main-space-equation}.\nThis is a nontrivial solution of length $n$ and maximum dimension $r$. Evidently, since the length $n$ is minimal, $U_i \\cap V_1 \\subset V_1$ for all $i \\in J_0$ and $V_i \\cap V_1 \\subset V_1$ for all $i \\in I_0$, so, without loss of generality, let $U_i, V_i \\subseteq V_1$, for $i \\in \\myset{n}$.\n\nSuppose that at step $t$, $0 \\leq t < r$, we have shown that \\cref{eq-main-space-equation} has the form,\n\\begin{equation*}\n\ta \\sum_{i = 0}^t (-1)^iq^{\\binom{i}{2}} \\Sigma_i = \\sum_{i \\in J_t} \\id_{U_i} - \\sum_{i \\in I_t} \\id_{V_i}\\;.\n\\end{equation*}\nFor $t = 0$ this is true.\nLet $X \\subset V_1$ be of dimension $r - t - 1$. 
Restrict the equation to $X$,\n\\begin{align*}\na \\sum_{i = 0}^t (-1)^i q^{\\binom{i}{2}} \\sum_{\\dim V = r - i}\\id_{V \\cap X} \n= \\sum_{i \\in J_{t}} \\id_{U_i\\cap X} - \\sum_{i \\in I_{t}} \\id_{V_i\\cap X} \\;.\n\\end{align*}\nThe dimension of $X$ is smaller than $r$, so the restricted solution is trivial.\nCalculate the number of $\\id_X$ terms on the left and on the right.\nDenote $b = \\card{\\{ i \\in J_t \\mid X = U_i \\}}$ and $c = \\card{\\{ i \\in I_t \\mid X = V_i \\}}$. Note that either $b=0$ or $c=0$. Using \\Cref{lemma-number-of-submodules} and \\Cref{lemma-binomial-sums},\n\\begin{equation*}\na \\sum_{i = 0}^t (-1)^i q^{\\binom{i}{2}} \\binom{t+1}{i}_q = a(-1)^t q^{\\binom{t+1}{2}} = b - c\\;,\n\\end{equation*}\nand therefore $c = 0$ if $t$ is even and $b = 0$ if $t$ is odd.\n\nAll the submodules of $V_1$ of dimension $r - t - 1$ appear on the left or on the right side of \\cref{eq-main-space-equation}, depending on the parity of $t$, with the same multiplicity. Using this fact, we rewrite \\cref{eq-main-space-equation} in the form,\n\\begin{equation*}\na \\sum_{i = 0}^{t+1} (-1)^i q^{\\binom{i}{2}} \\Sigma_i = \\sum_{i \\in J_{t+1}} \\id_{U_i} - \\sum_{i \\in I_{t+1}} \\id_{V_i}\\;.\n\\end{equation*}\nAfter the step $t = r - 1$ we get,\n\\begin{equation*}\na \\sum_{i = 0}^{r} (-1)^i q^{\\binom{i}{2}} \\Sigma_i = \\sum_{i \\in J_{r}} \\id_{U_i} - \\sum_{i \\in I_{r}} \\id_{V_i} \\equiv 0\\;.\n\\end{equation*}\nThe length of the equation is $\\frac{1}{2}a\\sum_{i = 0}^{r} q^{\\binom{i}{2}} \\binom{r}{i}_q$. Since the equation has the minimal length, $a = 1$ and $r = m + 1$.\nFrom \\Cref{lemma-binomial-sums}, $n = \\frac{1}{2} \\prod_{i=0}^{m} (1 + q^i) = \\prod_{i=1}^{m} (1 + q^i) = N$. 
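The final length computation can also be checked directly for small parameters: for $r = m+1$ the second lemma gives $\frac{1}{2}\sum_{i=0}^{r} q^{\binom{i}{2}}\binom{r}{i}_q = \frac{1}{2}\prod_{i=0}^{m}(1+q^i) = \prod_{i=1}^{m}(1+q^i)$, since $1+q^0 = 2$. A quick numerical sketch of this step (ours, illustrative only):

```python
def gauss_binom(t, i, q):
    """Gaussian binomial coefficient [t choose i]_q."""
    num = den = 1
    for j in range(i):
        num *= q ** (t - j) - 1
        den *= q ** (j + 1) - 1
    return num // den

def c2(i):
    return i * (i - 1) // 2  # binom(i, 2)

# Check that sum_{i=0}^{r} q^C(i,2) [r choose i]_q equals 2N with
# N = prod_{i=1}^{m} (1 + q^i) when r = m + 1, as in the proof above.
for q in (2, 3, 5):
    for m in range(1, 6):
        r = m + 1
        length_twice = sum(q ** c2(i) * gauss_binom(r, i, q) for i in range(r + 1))
        N = 1
        for i in range(1, m + 1):
            N *= 1 + q ** i
        assert length_twice == 2 * N
```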
Therefore $n' \\geq n = N$.\n\\end{proof}\n\n\\section{Extension theorem for MDS codes}\nThe famous Singleton bound states that for a code $C \\subseteq A^n$, $\\card{C} \\leq \\card{A}^{n - d + 1}$, where $d$ is the minimum distance of $C$. When a code $C$ attains the bound, it is called an MDS code. The value $k = n - d + 1$ is called the dimension of $C$, and the code is said to be an $(n,k)_A$ MDS code.\n\nAn alternative definition is the following. A code $C \\subseteq A^n$ is MDS if and only if the restriction of $C$ to any $k$ columns is isomorphic to $A^k$. In other words, any $k$ columns of $C$ can be taken as an information set of the code. We interpret this definition in terms of modules $\\V = (V_1,\\dots,V_n)$.\n\\begin{lemma}\\label{lemma:mds-modules}\n\tLet $C$ be an $(n, k)_A$ MDS code. For each subset $I \\subseteq \\myset{n}$ of size $k$, $\\sum_{i \\in I} V_i^{\\perp} = \\hat{W}_R$. Moreover, $\\card{V_i^\\perp} = \\card{A}$ for all $i \\in \\myset{n}$.\n\\end{lemma}\n\\begin{proof}\n\tLet $I \\subseteq \\myset{n}$ be a subset with $k$ elements. Let $C'$ be the code obtained from $C$ by keeping only the coordinates from $I$. The map $\\lambda' = (\\lambda_i)_{i \\in I}$, $\\lambda': W \\rightarrow A^k$ is a parametrization of $C'$. Since $C$ is MDS, $\\lambda'$ is injective, which implies $\\bigcap_{i \\in I} V_i = \\{0\\}$. Calculating the orthogonal, we get $\\sum_{i \\in I} V_i^\\perp = \\hat{W}_R$.\n\t\n\tWe know that all the modules $W,C,C'$ are isomorphic to $A^k$. Thus there is an isomorphism of right $R$-modules $\\hat{W}\\cong \\hat{A}^k$. Also, $\\card{V_i^\\perp} \\leq \\card{A} = \\card{\\hat{A}_R}$ and for any $i,j \\in \\myset{n}$, $\\card{V_i^\\perp + V_j^\\perp} = \\card{V_i^\\perp}\\card{V_j^\\perp}\/\\card{V_i^\\perp \\cap V_j^\\perp}$. 
Combining all the facts, we get $\\card{V_i^\\perp} = \\card{A}$.\n\\end{proof}\n\nThe next lemma shows that the condition of pseudo-injectivity in \\Cref{thm-isometry-criterium} can be omitted if a code is MDS.\n\n\\begin{lemma}\\label{lemma:mds-pseudoinjectivity}\n\tLet $C$ be an $(n, k)_A$ MDS code and let $f \\in \\Hom_R(C,A^n)$. If $\\V = \\U$, then $f$ extends to a monomial map.\n\\end{lemma}\n\\begin{proof}\n\tThe proof is almost identical to the second part of the proof of \\Cref{thm-isometry-criterium}.\n\tLet $\\sigma, \\tau \\in \\Hom_R(W,A)$ be two maps that parametrize a column in $C$ and a column in $f(C)$, respectively. Since $C$ is an MDS code, from \\Cref{lemma:mds-modules}, $\\Img \\sigma = A$, because $\\card{\\Img \\sigma} = \\card{(\\Ker \\sigma)^\\perp} = \\card{A}$.\n\t\n\tLet $\\Ker \\sigma = \\Ker \\tau = N \\subseteq W$. This implies $\\Img \\tau = \\Img \\sigma = A$.\n\tConsider the canonical isomorphisms $\\bar{\\sigma}, \\bar{\\tau}: W\/N \\rightarrow A$.\n\tThe map $h\\in \\Aut_R(A)$, defined as $h = \\bar{\\tau}\\bar{\\sigma}^{-1}$, satisfies the equality $h \\sigma = \\tau$.\n\\end{proof}\n\n\\begin{theorem}\\label{thm-mds-extension-theorem}\n\tLet $R$ be a ring with identity and let $A$ be a finite left $R$-module. Let $C$ be an $(n, k)_A$ MDS code, $k\\neq 2$. Each Hamming isometry $f \\in \\Hom_R(C,A^n)$ extends to a monomial map. \n\\end{theorem}\n\\begin{proof}\n\tAssume that there exists an unextendable isometry $f \\in \\Hom_R(C,A^n)$. From \\Cref{thm-isometry-criterium} and \\Cref{lemma:mds-pseudoinjectivity}, there exists a nontrivial solution of \\cref{eq-main-space-equation}, or equivalently, there exists a nontrivial solution of the orthogonal equation (\\ref{eq-dual}). It is clear that $f(C)$ is also an MDS code.\n\t\n\tThe proof is obvious for the case $k = 1$, so let $k \\geq 3$. 
This means, from \\Cref{lemma:mds-modules}, for any different $i,j,k \\in \\myset{n}$, $V_i^\\perp \\cap (V_j^\\perp + V_k^\\perp) = \\{0\\}$.\n\tWithout loss of generality, assume that $U_1^\\perp$ is covered nontrivially by modules $V_1^\\perp,\\dots, V_t^\\perp$, $t>1$, i.e. $U_1^\\perp = \\bigcup_{i=1}^t V_i^\\perp$, $\\{0\\} \\subset V_i^\\perp \\subset U_1^\\perp$, for $i \\in \\myset{t}$ and no module is contained in another.\n\n\tTake a nonzero element $a \\in U_1^\\perp \\cap V_1^\\perp$ and a nonzero element $b \\in U_1^\\perp \\cap V_2^\\perp$. Obviously, since $V_1^\\perp \\cap V_2^\\perp = \\{0\\}$, $a + b \\not\\in V_1^\\perp \\cup V_2^\\perp$.\n\tBut $a+b \\in U_1^\\perp$ and hence $t>2$. There exists an index $i$, let it be $3$, such that $a + b \\in U_1^\\perp \\cap V_3^\\perp$. Then $a+b \\in (V_1^\\perp + V_2^\\perp) \\cap V_3^\\perp \\neq \\{0\\}$, which gives a contradiction.\n\\end{proof}\n\nThe case of MDS codes of dimension 2 is observed in \\cite{d3}, where $R$ is a finite field and the alphabet $A$ is a vector space. Note that the statement is true for all abelian groups as $\\mathbb{Z}$-modules. In \\cite{forney} the author proved that there exists only $(n,1)_G$ and $(n,n)_G$ MDS codes over a nonabelian group $G$. It is not difficult to show that an analogue of the extension property holds for these two families of trivial codes.\n\n\\footnotesize\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\nIn recent years there has been a resurgence of interest in measurements and studies of the Extra-galactic Background Light or EBL, (e.g., \\citealt{Gervasi_2008}; \\citealt{Vernstrom_2011}; \\citealt{Driver_2016}; \\citealt{Abdallah_2018}; \\citealt{Lauer_2022}; Saldana-Lopez et al. 2022, submitted). 
The EBL is the term used to describe all radiation incident on the Earth of extra-galactic origin from a steradian of sky, i.e., it should exclude any sky-glow, Zodiacal and Diffuse Galactic Light components, as well as any light from the Milky-Way group. In terms of the origin of the EBL it can be divided into radiation arising from two eras (epochs): the Cosmic Microwave Background which represents relic radiation from the hot early Universe at the time of recombination and redshifted to the present day; and the remainder which is all radiation produced from all eras since recombination. The latter predominantly arises from star-formation, accretion onto super-massive black holes (i.e., Active Galactic Nuclei; AGN), and dust reprocessing which typically transfers ultraviolet and optical radiation to the mid and far infrared as thermal emission. \n\nFor convenience the EBL is also often broken into distinct wavelength regimes that cover the cosmic $\\gamma$-ray (CGB), X-ray (CXB), ultraviolet (CUB), optical (COB), infrared (CIB), the CMB, and radio (CRB) backgrounds. Of these backgrounds the CMB is the strongest, in terms of its integrated energy density, and approximately 5$\\times$ the sum of the other backgrounds combined \\citep{Hill_2018}. Of these other backgrounds the COB and CIB are roughly equal \\citep{Driver_2016}, and together comprise most of the non-CMB energy contribution.\n\nThe recent resurgence of interest in the EBL, is in part due to technological breakthroughs that have allowed the measurement of the various backgrounds to much lower flux limits within each waveband. This has allowed measurements to evolve from upper or lower limits to credible measurements. This is possible because in almost all bands we are now able to resolve the peak contribution to the EBL by constructing galaxy or source counts that show the number-density of discrete sources as a function of flux (see recent summary by \\citealt{Hill_2018}). 
Overall, our measurements of the EBL now extend from $\\gamma$-rays (VHE; \\citealt{Biteau_2015, Abeysekara_2019, Abdollahi_2018}) through to X-ray \\citep{Giacconi_1962, Gilli_2007}, the optical and infrared \\citep{Finke_2010, Driver_2016}, and ultimately into the radio \\citep{Murphy_2018, de_Zotti_2009, Fixsen_2011, Vernstrom_2011}. \n\nHowever, the resurgence also stems from the growing appreciation that many of the physical phenomena we wish to study (e.g., star-formation and AGN) result in photon production that manifests across many wavelengths. For example, star-formation results in energy production from X-ray binaries \\citep{Natarajan_2000, Khaire_2019}, supernova explosions, thermal radiation, synchrotron and free-free emission \\citep{Condon_1984,Condon_1992,Dale_2014}. As a consequence, the latest generation of Spectral Energy Distribution fitting models (e.g., CIGALE, \\citeauthor{Boquien_2019}, \\citeyear{Boquien_2019}; ProSpect, \\citeauthor{Robotham_2020} \\citeyear{Robotham_2020}), as well as the latest generation of semi-analytic simulations \\citep{Lagos_2018, Inoue_2013, Baes_2019}, now extend from the X-ray to radio domains.\n\nThis insight follows from our understanding that star-formation inferred in the optical\/near-IR can now be used to accurately predict the radio continuum fluxes \\citep{van_der_Kruit_1971, Davies_2017}, and vice versa. For example, estimates of the cosmic star-formation history \\citep{Madau_2014} can now be obtained from Very High Energy studies of Blazars and the interaction of their flux with the CIB \\citep{Abdollahi_2018}, or from CRB constraints \\citep{oldMatthews_2021, Nitu_2021} alone. Similarly, AGN are known to produce radiation at all wavelengths, albeit in a stochastic manner, in which none, one, two or all three wavelength regimes (X-ray, optical\/IR and radio) may be active at any one time. 
Hence to fully understand the AGN life-cycle and their role within galaxy formation also necessitates panchromatic astronomy from X-ray to radio wavelengths.\n\nThe long-term potential of EBL studies is that if we can truly understand the history of star-formation, the evolution of super-massive black holes, and the role of dust processing, then we should be able to explain the entire cosmic photon flux incident on the Earth. This would be a remarkable achievement. Even more promising perhaps, is the inverse, whereby accurate measurements of the EBL and its subdivision into redshift slices (a.k.a., the Cosmic Spectral Energy distributions) might allow us to reconstruct the entire cosmic star-formation history and its dependencies on model ingredients such as the initial mass function and metallicity evolution, for example, see early attempts to do this by \\citet{Koushan_2021} or Saldana-Lopez et al. (2022, submitted).\n\nHowever, at present there is tension in the EBL measurements between direct and indirect methods. See for example the review by \\citet{Driver_2021}. This manifests most in the COB, CIB and CRB in which direct and indirect measurements disagree by factors of $\\times 3-10$. Direct measurements typically necessitate the measurement of the \"above Earth sky\" followed by subtraction of foregrounds such as the Zodiacal Light and Diffuse Galactic Light for the COB and CIB, and subtraction of radio radiation emerging from the Galactic halo and surrounding high-velocity clouds and removal of the CMB for the CRB estimate \\citep{Fixsen_2011}. Indirect estimates include the measurement and modelling of galaxy number-counts (COB, CIB), or source counts (CRB), which are only sensitive to discrete sources of radiation (i.e., galaxies and AGN) but benefit by not being plagued by overwhelming foregrounds (COB, CIB) or backgrounds (CRB). 
\n\nIn the optical, the difficulty in the direct measurements likely stems from the subtraction of the Zodiacal Light and Diffuse Galactic Light which are typically 10-100$\\times$ the level of the EBL and dependent on direction and time of observation within the year. There is also a growing appreciation of the impact of sky-glow \\citep{Caddy_2022} on direct background estimates from facilities such as the Hubble Space Telescope (\\textit{HST}). In the COB and CIB the long standing factor of $\\sim 10$ discrepancy (see for example \\citealt{Bernstein_2002}) has possibly now been resolved by indirect constraints from Very High Energy constraints (e.g., \\citealt{Aharonian_2006}; \\citealt{Ahnen_2016}; \\citealt{HESS_2013}; \\citealt{Abdollahi_2018}) appearing to corroborate the low EBL results from integrated source counts. This also follows from a deeper understanding of the Zodiacal Light from recently revised \\textit{HST} Sky-Surface brightness measurements \\citep{Windhorst_2022}. However some level of tension between the direct and indirect COB does still remain as recent results from the New Horizons instrument, from distances beyond where Zodiacal Light should be significant, still show a factor of $\\times$2 inconsistency with the indirect count and VHE constraints, \\citep{Lauer_2021, Lauer_2022}.\n\nIn the radio there also appears to be an immutable discrepancy of a factor of 2-5$\\times$ between the (high) direct measurements from instruments such as the Absolute Radiometer for Cosmology, Astrophysics and Diffuse Emission (ARCADE-2; \\citealt{Fixsen_2011}), and recent source count studies (see for example \\citealt{Vernstrom_2011}). In the radio the critical subtraction for these direct measurements is the Raleigh-Jeans tail of the CMB which still outshines the radio source count signal at high frequencies. However, a compounding problem is that most source counts are not quite deep enough to become convergent. 
Hence, source count estimates of the CRB require sophisticated modelling of the SFG population (see for example \\citealt{Wilman_2008}; \\citealt{Hopkins_2003}; \\citealt{Seymour_2008}; \\citealt{Seymour_2004}; \\citealt{Mancuso_2017}).\n\nIn this paper, which forms part of our broader work on the overall EBL, we revisit the current constraints on the CRB from source count data across a broad frequency range, building on earlier studies and compendiums by \\citet{Windhorst_2003}, \\citet{Vernstrom_2011}, and \\citet{de_Zotti_2009}. This is motivated by the goal of extending our earlier optical-IR models into the radio \\citep{Andrews_2018, Koushan_2021}, and also by preparation for upcoming surveys scheduled to take place on a myriad of new radio facilities (e.g., ASKAP, MeerKAT, LOFAR, MWA, etc.) in anticipation of the upcoming Square Kilometre Array \\citep{Bonaldi_2019}. These facilities, and the SKA especially, should have the capacity to finally extend source counts to sufficiently low flux limits for the CRB to be convergent at all frequencies without recourse to models or extrapolations. Hence, this paper is intended to act as a useful reference to previous works, as well as a launchpad for future studies using imminent new technologies.\n\nIn the later stages of the paper we use the recent SHARK semi-analytic simulation \\citep{Lagos_2018} to model the SFG population, allowing us to extrapolate our source counts in a meaningful manner by including the star-forming population that lies below our detection thresholds. 
This also opens the door for us to constrain the CRB using the model predictions to separate the CRB contributions of the AGN and SFG populations.\n\nIn Section 2 we describe the compendium of radio data that we have assembled across 7 frequency bands: 150\\,MHz, 325\\,MHz, 610\\,MHz, 1.4\\,GHz, 3\\,GHz, 5\\,GHz and 8.4\\,GHz; we describe our method for implementing spectral index corrections when necessary, and show our source counts, the source counts normalized to Euclidean ($N\/S^{-2.5}$), and the more useful rendition that shows the contribution of each flux interval to the total energy density ($N\/S^{-2}$). In Section 3 we describe our attempts to extract the CRB for the AGN and SFG populations using simple spline fitting with and without extrapolation. In Section 4 we describe a more robust method for extracting the SFG and AGN CRB contributions by fitting the recent SHARK semi-analytic model \\citep{Lagos_2018}. Finally, in Section 5 we show how our ProSpect EBL model extends into the radio and how well the cosmic star-formation history matches the CRB for the SFG population.\n\nWe present our conclusions in Section 6.\nThroughout we adopt a standard cosmology of $H_0=70$ km\/s\/Mpc, $\\Omega_{\\rm M}=0.3$ and $\\Omega_{\\Lambda}=0.7$.\n\n\\section{Radio surveys, catalogues and data}\nHere we aim to provide a complete compendium of radio source count data covering 7 frequencies: 150\\,MHz, 325\\,MHz, 610\\,MHz, 1.4\\,GHz, 3\\,GHz, 5\\,GHz, 8.4\\,GHz. Note that this compendium does not include all the recent 887.5 and 943.5\\,MHz data as these are only just emerging from ASKAP via surveys such as the Evolutionary Map of the Universe (EMU; see for example \\citealt{Norris_2021}; \\citealt{Gurkan_2022}). In total, our final compendium comprises 74 distinct radio surveys spread across 16 different frequencies and corrected to one of the 7 frequencies listed above. 
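The $N/S^{-2}$ rendition mentioned above is useful because the discrete background is simply the integral of the counts weighted by flux, $I_\nu = \int S\,(dN/dS)\,dS$, so $S^2\,dN/dS$ shows the contribution per logarithmic flux interval. A minimal numerical sketch of this integral (our own illustration with a toy power-law count model, not the paper's data):

```python
import numpy as np

def ebl_from_counts(S, dNdS):
    """Integrate I = ∫ S (dN/dS) dS with the trapezoidal rule in ln S,
    where the integrand becomes S^2 dN/dS (the quantity plotted in the
    lower panels of the count figures)."""
    lnS = np.log(S)
    f = S ** 2 * dNdS
    return float(np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(lnS)))

# Toy sub-Euclidean power-law counts dN/dS = A * S^-1.7 (illustrative numbers only).
A, gamma = 1.0e3, 1.7
S = np.logspace(-6, 1, 2000)  # flux grid, arbitrary units
I_num = ebl_from_counts(S, A * S ** -gamma)
I_ana = A * (S[-1] ** (2 - gamma) - S[0] ** (2 - gamma)) / (2 - gamma)
assert abs(I_num / I_ana - 1) < 1e-3  # matches the analytic integral
```

For a slope shallower than Euclidean ($\gamma < 2$) the integral converges at the faint end, which is the convergence property discussed for the real counts.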
This compendium is designed to cover the largest flux density range possible in each frequency band as a baseline reference for future survey programs. The data hence include large all-sky surveys such as VLASS \\citep{Gordon_2021}, NVSS \\citep{Condon_1998}, and LoTSS \\citep{Hardcastle_2021}, which provide catalogues often numbering into the millions of objects, and excellent statistics for bright source counts. These are complemented by deep small-area surveys which reach very faint flux levels, extending down to and slightly below ~1\\,mJy. \\\\\n\nThe data assembled in this paper were taken with many different instruments and at many different frequencies over the last 5 decades. Early data were typically taken by single-dish radio telescopes, such as the National Radio Astronomy Observatory (NRAO) 300-ft telescope, which provides some of the oldest source counts originally assembled as part of the \\citet{Windhorst_2003} compendium. These data provide the brightest source counts at 1.4\\,GHz, and were used for the first demonstration of the expected Euclidean behavior of the density of bright source counts in the local universe. Multi-dish synthesis telescopes such as the Westerbork Synthesis Radio Telescope (WSRT), the Australia Telescope Compact Array (ATCA), and the Very Large Array (VLA) provide much of the bright source counts in the modern era, achieving finer resolution than is possible with single-dish telescopes and extending to higher frequencies. Similar arrays such as the Giant Metrewave Radio Telescope (GMRT) provide the same benefit at lower frequencies. Additional counts are also provided by the latest generation of radio facilities that include LOFAR, MWA, ASKAP and MeerKAT (with the latter three representing the SKA precursors).\n\nOur data also build on previous compendiums of radio source count data such as those provided by \\citet{Windhorst_2003} and \\citet{Vernstrom_2011}. 
In particular, the \\citet{Vernstrom_2011} study used the data available at the time to make a state-of-the-art multi-frequency estimate of the discrete radio EBL covering six frequencies. These compendiums and the references therein provided the starting point for gathering the source count data described in the current paper. \n\nIn assembling our final data compendium, we adjusted all of the data to the same format, essentially frequency and source density per steradian, along with upper and lower limits and additional information such as central frequency, area and survey labels. Hence we have constructed a single homogenized table of source counts covering all contributing surveys and frequencies, which we provide as a Machine Readable Table, a sample of which is shown in the Appendix as Table \\ref{tab:A.4}.\n\nIn Table\\,\\ref{tab:inputdata}, we provide a summary of the data sets used in this paper, and include in the Appendix a breakdown of the compendium from \\citet{Windhorst_2003} as Table \\ref{tab:A.1}. In these tables we provide the original frequency, reference, instrument, and area surveyed in steradians, for all data sets used. Where possible, we provide the field or survey name and appropriate field center. Finally, we provide the Full Width at Half Maximum ({\\sc fwhm}) of the radio survey instrument's beam in arcseconds. When given in the paper, the stated value for the beam is used. 
If the beam was not specified, it is calculated by using {\\sc fwhm} $= 1.22\\times \\frac{\\lambda}{D}$ where $\\lambda$ is the central wavelength of the receiver bandwidth used and $D$ is the telescope diameter or maximum baseline, both in meters; the resulting {\\sc fwhm} in radians is then converted to arcseconds.\n\n\n\\begin{table*}\n\\label{tab:data}\n\\caption{Summary of the various data sets used in this paper.\n\\label{tab:inputdata}}\n\\begin{tabular}{p{1.0cm}p{3.0cm}p{1.5cm}p{2.0cm}p{3.0cm}p{1.5cm}p{1.5cm}p{2.0cm}} \\\\ \\hline\nFrequency & Reference & Instrument & Field\/Survey & Field Location & Area & Beam FWHM & Confusion Source Density\/ \\\\ \n(MHz) & & & Name & & (Ster) & (Arcsec) & Faintest Source Bin (mJy) \\\\ \\hline\n154 & \\citet{Franzen_2016} & MWA & MWA EoR0 & $\\alpha = 00^h 00^m 00^s,\\delta = -27^{\\circ} 00^{'} 00^{''}$ & 0.1736 & 138.6 & $1.14\\times 10^{-2}$ \\newline 33.8 \\newline\\\\ \n150 & \\citet{Mandal_2021} & LOFAR & Lockman Hole; Bo\u00f6tes; ELAIS-N1 & Multiple Fields & $2.31\\times 10^{-2}$ & 6 & 1.35 \\newline 0.22 \\\\ \n151 & \\citet{McGilchrist_1990} & CLFST & Unnamed Fields & 2 Field Centers & 0.144 & 70 & $1.32\\times 10^{-3}$ \\newline 105.9 \\newline\\\\ \n151.5 & \\citet{Hardcastle_2021} & LOFAR & LoTSS-DR2, Bo\u00f6tes, ELAIS-N1, Lockman & Multiple & 0.6 (LoTSS-DR2) $2.78\\times 10^{-3}$ (Summed total of deep fields), & 6 & 0.106 \\newline 0.01 \\\\ \n150 & \\citet{Williams_2016} & LOFAR & Bo\u00f6tes \\& Multiple Pointing Centers & Bo\u00f6tes - $\\alpha = 14^h 32^m 00^s,\\delta = +34^{\\circ} 30^{'} 0^{''}$ & $5.79\\times 10^{-3}$ & 4.19 & $3.07\\times 10^{-4}$ \\newline 0.71\\\\\n200 & \\citet{Hurley-Walker_2016} & MWA & GLEAM & All-Southern Sky Excluding Galactic Plane and Magellanic Clouds & 7.5640 & 120 & $1.02\\times 10^{-4}$ \\newline 25\\\\ \n325 & \\citet{Mazumder_2020} & GMRT & Lockman Hole & $\\alpha = 10^h 48^m 00^s,\\delta = 58^{\\circ} 08^{'} 00^{''}$ & $1.83\\times 10^{-3}$ & 9 & $1.42\\times 10^{-3}$ 
\\newline 0.4 \\newline\\\\ \n327 & \\citet{Oort_1988} & WSRT & Lynx & $\\alpha = 08^h 39^m 52.5^s,\\delta = +43^{\\circ} 52^{'} 44^{''}$ & $1.54\\times 10^{-2}$ & 64.7 & $1.67\\times 10^{-2}$ \\newline 6.71 \\newline \\\\ \n325 & \\citet{Riseley_2016} & GMRT & Super-Class; Abell & Multiple Pointing Centers & $1.98\\times 10^{-3}$ & 13 & $5.29\\times 10^{-3}$ \\newline 0.242\\newline\\\\\n325 & \\citet{Sirothia_2009} & GMRT & ELAIS-N1 & $\\alpha = 16^h 10^m 00^s,\\delta = 54^{\\circ} 36^{'} 00^{''}$ & $7.62\\times 10^{-5}$ & 8.31 & $1.35\\times 10^{-3}$ \\newline 0.367\\newline\\\\ \n610 & \\citet{Bondi_2006} & GMRT & VVDS-VLA & $\\alpha = 02^h 26^m 00^s,\\delta = -04^{\\circ} 30^{'} 00^{''}$ & $3.05\\times 10^{-4}$ & 6 & $4.20\\times 10^{-4}$ \\newline 0.37\\newline\\\\ \n610 & \\citet{Garn_2008} & GMRT & ELAIS-N1 & $\\alpha = 16^h 11^m 00^s,\\delta = 55^{\\circ} 00^{'} 00^{''}$ & $3.05\\times 10^{-4}$ & 5.477 & $3.94\\times 10^{-5}$ \\newline 0.338\\newline\\\\ \n887.5 & \\citet{Hale_2021} & ASKAP & RACS & All-Sky South of $\\delta = 41^{\\circ} 00^{'} 00^{''}$ & 8.535 & 25 & $1.46\\times 10^{-3}$ \\newline 1.40\\newline\\\\ \n610 & \\citet{Ibar_2009} & GMRT & Lockman Hole & Multiple Pointing Centers & $2.99\\times 10^{-4}$ & 4.25 & $2.34\\times 10^{-3}$ \\newline 0.056\\newline\\\\ \n610 & \\citet{Moss_2007} & GMRT & 1H XMM-Newton\/Chandra & $\\alpha = 1^h 11^m 00^s,\\delta = -4^{\\circ} 00^{'} 00^{''}$ & $2.72\\times 10^{-4}$ & 6.69 & $3.53\\times 10^{-4}$ \\newline 0.49\\newline\\\\ \n610 & \\citet{Ocran_2020}& GMRT & ELAIS-N1 & $\\alpha = 16^h 10^m 30^s,\\delta = 54^{\\circ} 35^{'} 00^{''}$ & $5.66\\times 10^{-4}$ & 6 & $2.97\\times 10^{-3}$ \\newline 0.078\\newline\\\\ \n1400 & \\citet{Biggs_2006} & VLA & HDFN, ELAIS-N1 & Multiple Pointing Centres & $8.12\\times 10^{-5}$ & 1.51 & $2.30\\times 10^{-4}$ \\newline 0.402\\newline\\\\\n1400 & \\citet{Condon_1998} & VLA & NVSS & All-Sky North of $\\delta = -40^{\\circ} 00^{'} 00^{''}$ & 11.34 & 45 & $2.06\\times 
10^{-3}$ \\newline 3.16\\newline\\\\ \n1400 & \\citet{Fomalont_2006} & VLA & SSA13 & $\\alpha = 13^h 12^m 2,\\delta = +42^{\\circ}38^{'} 0^{''}$ & $9.78\\times 10^{-5}$ & 1.54 & $2.92\\times 10^{-4}$ \\newline 0.034\\newline \\\\ \n1400 & \\citet{Hopkins_2003} & ATCA & Phoenix Deep Survey-Deep & $\\alpha = 01^h 14^m 12^s,\\delta = -45^{\\circ} 44^{'}8^{''}$ & $9.21\\times 10^{-5}$ & 8.48 & $4.69\\times 10^{-3}$ \\newline 0.057\\newline\\\\ \n\n\\end{tabular}\n\\end{table*}\n\n\\setcounter{table}{0}\n\n\\begin{table*}\n\\caption{(cont'd) Summary of the various data sets used in this paper.}\n\\begin{tabular}{p{1.0cm}p{3.0cm}p{1.5cm}p{2.0cm}p{3.0cm}p{1.5cm}p{1.5cm}p{2.0cm}} \\\\ \\hline\nFrequency & Reference & Instrument & Field\/Survey & Field Location & Area & Beam FWHM & Confusion Source Density \\\\ \n(MHz) & & & Name & & (Ster) & (Arcsec) & Faintest Source Bin (mJy) \\\\ \\hline\n\n1400 & \\citet{Hopkins_2003} & ATCA & Phoenix Deep Survey-Primary & $\\alpha = 01^h 14^m 12^s,\\delta = -45^{\\circ} 44^{'}8^{''}$ & $1.37\\times 10^{-3}$ & 8.48 & $2.34\\times 10^{-3}$ \\newline 0.1\\newline \\\\ \n1400 & \\citet{Huynh_2005} & ATCA & HDFS & $\\alpha = 22^h 33^m 25^s,\\delta = - 60^{\\circ} 38^{'} 09^{''}$ & $1.06\\times 10^{-4}$ & 6.6 & $2.65\\times 10^{-3}$ \\newline 0.059\\newline\\\\\n1400 & \\citet{Matthews_2021} & MeerKAT & DEEP2 & $\\alpha = 04^h 13^m 26^s, \\delta=-80^{\\circ} 0^{'} 0^{''}$ & $3.17\\times 10^{-4}$ & 7.6 & $2.56\\times 10^{-2}$ \\newline 0.0126\\newline\\\\ \n1400 & \\citet{Prandoni_2000} & ATCA & ATESP & Multiple Pointing Centres & $7.92\\times 10^{-3}$ & 7.6 & $3.33\\times 10^{-4}$ \\newline 0.83\\newline\\\\ \n1400 & \\citet{Seymour_2008} & VLA & $13^h$ Chandra Deep Field & $\\alpha = 13^h 34^m 37^s,\\delta = +37^{\\circ} 54^{'} 44^{''}$ & $5.97\\times 10^{-5}$ & 3.3 \\\\ \n1400 & \\citet{Windhorst_2003} & Multiple & N\/A & Large Data Compendium & See Appendix for Windhorst compendium information & \\\\ \n2100 & \\citet{Butler_2017}& ATCA & 
XXL-S & $\\alpha = 23^h 30^m 00^s,\\delta = -55^{\\circ} 0^{'} 0^{''}$ & $7.62\\times 10^{-3}$ & 4.76 & $1.12\\times 10^{-4}$ \\newline 0.338 \\newline\\\\ \n3000 & \\citet{Gordon_2021} & VLA & VLASS & Entire Sky North of $\\delta = -40^{\\circ} 00^{'} 00^{''}$ & 10.75 & 2.5 & $5.30\\times 10^{-6}$ \\newline 2.50 \\newline\\\\ \n3000 & \\citet{Smolcic_2017} & VLA & COSMOS & $\\alpha = 10^h 00^m 28.6^s,\\delta = - 02^{\\circ} 12^{'} 21^{''}$ & $6.09\\times 10^{-4}$ & 0.75 & $1.41\\times 10^{-4}$ \\newline 0.011 \\newline\\\\ \n3000 & \\citet{Van_der_Vlugt_2021} & VLA & COSMOS & $\\alpha = 10^h 00^m 28.6^s,\\delta = - 02^{\\circ} 12^{'} 21^{''}$ & $1.52\\times 10^{-5}$ & 1.97 & $7.02\\times 10^{-3}$ \\newline $3.72\\times 10^{-3}$ \\newline\\\\ \n3000 & \\citet{Vernstrom_2016} & VLA & Lockman Hole & $\\alpha = 10^h 46^m 00^s,\\delta = +59^{\\circ} 0^{'} 0^{''}$ & $1.60\\times 10^{-5}$ & 8 & $1.36\\times 10^{-2}$ \\newline 0.013 \\newline\\\\ \n5500 & \\citet{Huynh_2015} & ATCA & Chandra Deep Field South & $\\alpha = 3^h 32^m 28.0^s,\\delta = - 27^{\\circ} 48^{'} 30^{''}$ & $7.62\\times 10^{-5}$ & 3.16 & $2.60 \\times 10^{-4}$ \\newline 0.051 \\newline\\\\ \n5000 & \\citet{Prandoni_2006} & ATCA & ATESP & Large Survey Near SGP at $\\alpha = 22^h,\\delta = - 40^{\\circ}$ & $3.05\\times 10^{-4}$ & 10.1 & $3.10\\times 10^{-4}$ \\newline 0.44 \\newline\\\\ \n4850 & \\citet{Windhorst_2003} & Multiple & N\/A & Large Data Compendium & See Appendix For Windhorst Compendium Information & \\\\ \n8500 & \\citet{Henkel_2005} & VLA & HDFN & $\\alpha = 12^h 36^m 49.4^s,\\delta = + 62^{\\circ} 12^{'} 58^{''}$ & $4.57\\times 10^{-4}$ & 2.28 & $1.92\\times 10^{-5}$ \\newline 0.21 \\newline\\\\ \n9000 & \\citet{Huynh_2020} & ATCA & Chandra Deep Field South & $\\alpha = 3^h 32^m 28.0^s,\\delta = - 27^{\\circ} 48^{'} 30^{''}$ & $8.41\\times 10^{-5}$ & 2.28 & $2.52\\times 10^{-4}$ \\newline 0.109 \\newline\\\\ \n8400 & \\citet{Windhorst_2003} & Multiple & N\/A & Large Data Compendium & 
See Appendix For Windhorst Compendium Information & \\\\ \\hline\n\\end{tabular}\n\\end{table*}\n\n\\clearpage\n\nIn Figure \\ref{fig:fig2} we show our compendium of counts at 3\\,GHz; equivalent figures for all other frequencies are shown in the Appendix as Figure\\,\\ref{fig:A1}. We show in the top panels the source counts, in the middle panel the source counts divided by the Euclidean slope ($S^{-2.5}$), and in the lower panel the source counts divided by $S^{-2}$, which best shows the contribution of each flux interval to the overall EBL at that frequency. For the 3\\,GHz data shown in Figure\\,\\ref{fig:fig2} we can see that in the lower panel we have a two-humped distribution, reflecting the contributions from AGN (right hump) and SFG (left hump), with each providing a roughly equal contribution to the total EBL (the full integral under the curve).\n\n\n\\subsection{Spectral index corrections}\n\\label{Section_2.1}\nIn order to compare radio data taken at slightly different frequencies, corrections must be applied. For example, the 1.4\\,GHz source counts have traditionally been obtained at frequencies ranging from 1.4--1.5\\,GHz. Hence corrections need to be applied to bring the central frequency of each survey onto a common scale. Corrections are achieved using an adopted spectral index, hereafter referred to as $\\alpha$, where the flux $S_{\\nu} \\propto \\nu^{-\\alpha}$. In most previous work, a constant or median value of $\\alpha$ is adopted for simple visual comparisons. This is often assumed to have a median value of $\\alpha \\sim 0.7$. However, the spectral index is known to vary as a function of radio flux. This is because the radio source counts are dominated by different populations at different fluxes which do not necessarily have the same nature, age, or redshift. Hence, assuming a constant value for $\\alpha$ over all frequency and flux ranges is not necessarily correct. 
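The two small conversions used in assembling the compendium, the diffraction-limit beam estimate quoted with Table 1 and the spectral-index flux rescaling $S_\nu \propto \nu^{-\alpha}$, can each be written in a line or two. A sketch (the constants and example numbers are ours, for illustration only):

```python
C_LIGHT = 3.0e8        # speed of light in m/s (assumed constant)
RAD2ARCSEC = 206264.8  # arcseconds per radian

def beam_fwhm_arcsec(freq_hz, diameter_m):
    """Diffraction-limit beam estimate, FWHM = 1.22 * lambda / D, in arcsec."""
    lam = C_LIGHT / freq_hz
    return 1.22 * lam / diameter_m * RAD2ARCSEC

def convert_flux(S, nu_obs, nu_target, alpha):
    """Rescale a flux density between nearby frequencies assuming S_nu ∝ nu^-alpha."""
    return S * (nu_target / nu_obs) ** (-alpha)

# e.g. a 25 m dish at 1.4 GHz, and a 10 mJy source moved from 1.5 to 1.4 GHz:
fwhm = beam_fwhm_arcsec(1.4e9, 25.0)         # ~2160 arcsec, i.e. ~0.6 degrees
S14 = convert_flux(10.0, 1.5e9, 1.4e9, 0.7)  # ~10.5 mJy
assert 2000.0 < fwhm < 2300.0
assert 10.0 < S14 < 11.0
```

With a flux-dependent $\alpha$, as adopted in this paper, `alpha` would itself be evaluated from the fitted $\alpha(S)$ relation before the rescaling.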
For the frequency corrections used in this paper, we allow $\\alpha$ to vary as a function of flux as detailed in \\citet{Windhorst_2003}. We then use the \\citet{Windhorst_2003} relationships between the median $\\alpha$, frequency and radio flux to perform our corrections. This adjustment should bring all surveys within our 7 frequency bands into frequency alignment. \n\nFigure\\,\\ref{fig:fig1} shows the median $\\alpha$ at high (upper panel) and low (lower panel) frequencies as a function of radio flux and we fit these data with cubic polynomials to determine the optimum value of $\\alpha$ for each flux point. The spectral index $\\alpha$ was measured between 0.41 or 1.4 GHz and 4.86 or 5.0 GHz. Many of the median $\\alpha$ values summarized in \\citet{Windhorst_2003} were originally measured by \\citet{Kapahi_1986}, who used sources identified at 0.41 and 5.0 GHz to derive a median spectral index between the two frequencies. Care was taken to only use surveys that had the FWHM of the survey frequency with the highest resolution convolved to become equal to the FWHM of the survey frequency with the lower resolution, so as to not bias the spectral index measurements and their medians that were made as a function of flux density. We also made sure to use only sources above the 5-$\\sigma$ detection limits at {\\it both} survey frequencies in order not to bias the median spectral index derived as a function of radio flux. The flux scales of the actual surveys were then converted to two fiducial frequencies, i.e., either 610 MHz or 4.86 GHz, whichever was closest to the observed frequency of each survey. This was done using the relationships derived in \\citet{Windhorst_2003}, which we have refined in Figure\\,\\ref{fig:fig1}. 
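As an illustration of this correction, the following minimal sketch converts a survey flux to a fiducial frequency using a flux-dependent median $\alpha$. The cubic-polynomial coefficients below are placeholders, not the fitted values of Figure\,\ref{fig:fig1}, and the iteration mirrors the fact that $\alpha$ depends on the (initially unknown) flux at the fiducial frequency:

```python
import numpy as np

# Placeholder cubic coefficients for the median spectral index as a
# function of log10(flux/mJy); NOT the fitted coefficients of Fig. 1.
ALPHA_COEFFS = np.array([0.0, -0.01, -0.05, 0.75])  # highest power first

def median_alpha(log10_flux_mjy):
    """Median spectral index modelled as a cubic in log10(flux/mJy)."""
    return np.polyval(ALPHA_COEFFS, log10_flux_mjy)

def to_fiducial(flux_mjy, nu_obs_ghz, nu_fid_ghz, n_iter=3):
    """Convert a flux from nu_obs to the fiducial frequency using
    S_nu ~ nu^-alpha, iterating because the appropriate alpha depends
    on the flux at the fiducial frequency."""
    s_fid = flux_mjy
    for _ in range(n_iter):
        alpha = median_alpha(np.log10(s_fid))
        s_fid = flux_mjy * (nu_fid_ghz / nu_obs_ghz) ** (-alpha)
    return s_fid
```

With a positive $\alpha$, converting a flux upward in frequency lowers it, and converting to the same frequency leaves it unchanged, as expected.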
Of course, this process requires the actual relationship to be known before it can be used to convert the flux scales at adjacent frequencies to the fiducial frequency, and so we iterated the best fits in Figure\\,\\ref{fig:fig1} until convergence was reached, which happened quickly, within 1--2 iterations. For surveys not done at 4.86 GHz or 610 MHz, the relationship closer in frequency to a data set was used to perform these frequency corrections. Figure\\,\\ref{fig:fig1} highlights that using a constant spectral index to bring radio surveys at slightly different frequencies onto the same scale can be inaccurate.\n\nA cubic polynomial is fit to each $\\alpha$ vs $\\log_{10}$ flux relationship in Figure \\ref{fig:fig1}. The fits are generated via Markov Chain Monte Carlo (MCMC) analysis using the \\textsc{LaplacesDemon} software package. \n\n\n\n\\begin{figure}\n\\vspace*{0.000cm}\n\\includegraphics[width=\\columnwidth]{Alpha_vs_Logflux_06cm.png}\n\\includegraphics[width=\\columnwidth]{Alpha_vs_Logflux_50cm.png}\n\n\\vspace*{0.00cm}\n\\caption{(Upper, Fig.~1a) The median spectral index $\\alpha$ versus flux relationship at a wavelength of 6\\,cm or frequency of $\\sim$5\\,GHz. (Lower, Fig.~1b) The median $\\alpha$ versus flux relationship at a wavelength of 50\\,cm or frequency of $\\sim$610\\,MHz. Samples from the final 50 iterations of the MCMC analysis are shown in blue and the best fit is shown in red.}\n\\label{fig:fig1}\n\\end{figure}\n\n\n\\vspace*{0.00cm}\n\n\\begin{figure*}\n\\vspace*{0.000cm}\n\\includegraphics[width=16cm, height=19.5cm]{panel_3000.png}\n\n\\caption{(upper) A collection of differential 3.0\\,GHz radio source counts. The error bars on most points are too small to display in this format; however, they are still present on all points. (centre) A collection of normalized 3.0\\,GHz differential radio source counts spanning $\\sim$7 orders of magnitude. (lower) The same collection of differential 3.0\\,GHz radio source counts showing the radio EBL. 
The counts are normalized to $\\log_{10}(N\/S_{\\nu}^{-2.0})$ to highlight contributions to the radio EBL. The right peak, centered near $\\sim10^{2.5}$\\,mJy, displays the contribution of primarily AGN, while the left peak, beginning to form near $\\sim10^{-2}$\\,mJy, is dominated by SFG. The dark black and yellow line is the best spline fit to the data, while the light blue lines are spline fits which were allowed to vary within the error bars of each data point and the error floor added to each point. The yellow portion of the best fit line outlines the first peak of the radio EBL contribution dominated by AGN. The convergence at both ends allows for a convergent measurement of the discrete radio EBL. The faintest three points from \n\\citet{Vernstrom_2014} \nprovide this convergence despite being a statistical analysis of the noise, which is reflected by the large error bars and uncertainty in the converging slope. Points with transparency are shown for completeness but are left out of the spline fit for a variety of reasons detailed in Sections \\ref{Section_2.2} and \\ref{Section_2.3}. The same figures for the other frequencies are shown in the appendix.\n\\label{fig:fig2}}\n\\end{figure*}\n\n\n\n\n\\subsection{Problematic data sets}\n\\label{Section_2.2}\nDue to the significant amount of data used --- extending back to the 1970s and using a range of instruments, frequency corrections (where applicable), areas, resolutions, and survey regions --- inconsistencies are to be expected. During our review of the data only two entire data sets were deemed sufficiently anomalous to be omitted from our final analysis, but they are included in the compendium for completeness. At 150\\,MHz, the data of \\citet{McGilchrist_1990} is not considered in our fitting, since the newer data set from the LOFAR Two-Meter Sky Survey of \\citet{Mandal_2021} covers the same range over a larger area, while being fully consistent with other available data. 
At 3\\,GHz, the data of \\citet{Smolcic_2017} is also shown, but again not included in the fitting. This relatively recent VLA survey was conducted in the COSMOS field, and the inconsistency between their data and those of \\citet{Van_der_Vlugt_2021} is addressed in the latter paper, where the source counts of \\citet{Van_der_Vlugt_2021} are consistently a factor of $\\sim$1.4 above those of \\citet{Smolcic_2017}. This offset is discussed by \\citet{Van_der_Vlugt_2021} and partially ascribed to the impact of resolution bias, as the Van der Vlugt team used a higher resolution ($0.2^{''}$ versus $0.75^{''}$) and hence were less prone to missing sources due to resolution bias. The other contribution to the offset explored by \\citet{Van_der_Vlugt_2021} could be due in part to cosmic variance, though this is not claimed to be the dominant effect. In this case the \\citet{Smolcic_2017} data covered a larger area than \\citet{Van_der_Vlugt_2021}, the latter of which surveyed an area completely contained within the former. For a more complete discussion and analysis of the two data sets see \\citet{Van_der_Vlugt_2021}. Given the incompatibility of these data sets we elected to adopt the \\citet{Van_der_Vlugt_2021} data purely on the basis of its greater consistency with two additional independent data sets (see Figure \\ref{fig:fig2}).\n\n\\subsection{Problematic data points}\n\\label{Section_2.3}\nFor many of the data sets, the assembled data often include points that suggest incompleteness at the faint end --- i.e., the source counts flatten or turn down compared to a best fit extrapolation of the brighter survey points that are very likely complete. The faintest flux bins in each survey typically have 50--100 sources per bin, and therefore RMS errors of the order of 14--10\\%, respectively. 
As a working definition, therefore, we consider faint object counts to have become visibly incomplete if the faintest bins in a survey are higher or lower than the (power-law) extrapolation from brighter bins by twice that amount, i.e., 28--20\\%, respectively. Such faintest flux bins have been flagged as potentially incomplete in that survey, and are therefore not used in our fits of the source counts.\nFor each dataset we need to determine the limits over which the data is credible. In most cases, the majority of a data set is used, with some very bright or very faint data points flagged as incomplete or of insufficient accuracy. At the faint end this is due to the survey sensitivity, and at the bright end to the sampling of exceptionally small volumes, which behave in a non-Euclidean manner. An example of incomplete points can be seen at 150\\,MHz (see Figure\\,\\ref{fig:A1}, middle panel), normalized to the Euclidean expectation. In data from the GLEAM survey \\citep{Hurley-Walker_2016}, for example, the faintest data points clearly fall below the other data points, and are hence flagged as incomplete. The reason for data to be omitted is also highlighted at 610\\,MHz, where the brightest three points are shown (see Figure\\,\\ref{fig:A1}) but again flagged, since a turn upwards in the counts at bright fluxes is nonphysical (and likely indicative of a local group). \n\nIn the very local Universe we expect the source counts to follow the Euclidean expectation, i.e., neither any curvature nor the expansion has a noticeable impact over short distances. Hence we also incorporate a prior in our fitting by adding a very bright \"anchor-point\" representing a Euclidean extrapolation of the brightest data points. In this case we generate an anchor point at $10^{6}$\\,mJy and this is shown as the green circle in Figure\\,\\ref{fig:fig2}. 
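Two of the steps described above, flagging incomplete faint bins and generating the bright Euclidean anchor point, could be sketched as follows. This is a minimal illustration with a hypothetical incompleteness threshold argument, not the actual fitting pipeline:

```python
import numpy as np

def flag_incomplete(flux, counts, frac_err, n_fit=5):
    """Flag bins that deviate from a power-law fit to the brightest
    (assumed complete) bins by more than twice their fractional
    Poisson error. Arrays run from faint to bright."""
    # Power law = straight line in log-log space over the brightest bins.
    logS, logN = np.log10(flux[-n_fit:]), np.log10(counts[-n_fit:])
    slope, intercept = np.polyfit(logS, logN, 1)
    expected = 10 ** (slope * np.log10(flux) + intercept)
    return np.abs(counts - expected) / expected > 2.0 * frac_err

def euclidean_anchor(flux_bright, counts_bright, s_anchor=1e6):
    """Extrapolate a Euclidean slope (dN/dS ~ S^-2.5) from the
    brightest bin to a very bright anchor point (fluxes in mJy)."""
    return counts_bright * (s_anchor / flux_bright) ** -2.5
```

For bins with 50--100 sources (fractional errors of 14--10\%), a `frac_err` of 0.14--0.10 reproduces the 28--20\% flagging thresholds quoted above.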
At the best studied frequencies the anchor point is not strictly necessary, but at some of the less well-surveyed frequencies, where all-sky surveys have not been conducted, it is needed to ensure a physical spline fit to the source count data (see, for example, 8.4\\,GHz, where without the anchor point a simple spline fit would trend downward rather than stay flat; middle panel of Figure \\ref{fig:A2}). To be consistent, we adopt an anchor point at all frequencies. We also show the frequency-corrected source count from 1.4\\,GHz at 22.26\\,Jy as confirmation of our adopted anchor point, and to provide further constraint on the known bright-end behaviour of the source count fits. A fixed value for $\\alpha$ of 0.4 was chosen for this purpose.\n\n\\subsection{Towards homogeneous errors}\nThe data gleaned from the literature often provide errors derived in different ways. Hence some effort is required to homogenise the errors before any weighted fitting of the combined source count data. For a meaningful statistical fit it is also necessary to ensure the errors are realistic. To address this we calculate new errors for all datasets using the combination of a conservative error floor estimate to represent systematic uncertainties, and the Poisson expectation from the source counts. For each frequency we increase the error floor until the combined datasets are statistically consistent. This typically resulted in an error floor varying from 3\\% to 20\\%. To make the errors fully consistent at the 68\\% level, most surveys in Table~\\ref{tab:Table_2} needed a 3--6\\% (or sometimes 10\\%) adjustment in the absolute flux scale to make the survey consistent with the other surveys in the same flux range. Some of the 8.4 GHz and 325 MHz surveys needed as much as 20\\%. 
For the 8.4 GHz surveys, this is likely due to the more uncertain absolute flux-scale zero point at the highest interferometer frequencies, but at 325 MHz, it could also be due to instrumental confusion being a more important issue in the older interferometer images. For this reason, we checked in Table~\\ref{tab:inputdata} that none of the surveys had their faintest source count data points approach the confusion limit. To calculate this, we added two columns to Table \\ref{tab:inputdata}, namely the FWHM of each survey's synthesized beam (in arcsec), and in the last column we list the formal 5$\\sigma$ point source detection limit for each survey as well as the resulting confusion source density. The latter was computed as follows: For each survey, we used the upper panel of Figure \\ref{fig:fig2} to calculate the \\textit{integrated} source density down to the 5$\\sigma$ point source detection limits above. Next, we used each survey's synthesized beam FWHM-value --- converted to units of square degrees --- to calculate the number of independent beams available for each detected source down to that survey's 5$\\sigma$ completeness limit. Depending on the actual slope of the source counts, typical levels quoted for the confusion limit are 25--50 independent beams per detected source \\citep[see, e.g.,][and references therein]{Kramer_2022}. The confusion source density calculated this way (the inverse of the number of beams per source) is shown on the top line in the last column of Table~\\ref{tab:inputdata}, and the 5$\\sigma$ point source detection limit is shown on the second line for each survey. In general, all surveys are well below the confusion limit of 0.02--0.04 detected \nsources per beam. \n\nAn example can be seen at 325 MHz in Figure \\ref{fig:A1}, where there are small systematic offsets between the data sets, and an error floor of 20\\% is needed to bring the contributing datasets into statistical agreement prior to variance-weighted spline fitting. 
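The error-floor procedure above could be sketched as follows: add a fractional floor in quadrature to the published errors, and raise it until two overlapping datasets become statistically consistent (reduced $\chi^2 \le 1$). The datasets, grid of floors, and consistency criterion here are illustrative assumptions, not the paper's exact implementation:

```python
import numpy as np

def apply_error_floor(err, flux, floor_frac):
    """Add a fractional error floor in quadrature to published errors."""
    return np.sqrt(err**2 + (floor_frac * flux) ** 2)

def find_error_floor(vals_a, errs_a, vals_b, errs_b,
                     floors=np.arange(0.0, 0.25, 0.01)):
    """Smallest fractional floor making two overlapping datasets
    statistically consistent (reduced chi^2 <= 1)."""
    for f in floors:
        ea = apply_error_floor(errs_a, vals_a, f)
        eb = apply_error_floor(errs_b, vals_b, f)
        chi2 = np.sum((vals_a - vals_b) ** 2 / (ea**2 + eb**2)) / len(vals_a)
        if chi2 <= 1.0:
            return f
    return floors[-1]
```

Two datasets with, say, $\sim$10\% systematic offsets but tiny quoted errors would then require a floor of several per cent, in line with the 3--20\% floors quoted above.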
The final values for each error floor for all frequencies are given in Table \\ref{tab:Table_2}. The relatively large errors at 5 and 8.4 GHz are due to the very small areas covered by the primary beams of radio interferometers at these frequencies, so that these high frequency surveys likely have a significant Cosmic Variance (CV) component given the few fields covered. The larger errors at the low frequencies of 325--610 MHz can have one or more of the following causes: the surveys' primary beams here are much larger, so CV is likely much smaller, but the larger synthesized beams may lead to more source confusion, and the lower frequencies may include a significant fraction of sources whose spectral shape is not a simple power-law due to, e.g., synchrotron self-absorption in compact radio sources. We leave this topic to future study, and will adopt the errors in Table \\ref{tab:Table_2} here as needed for a consistent error budget, whatever their cause. \n\n\\begin{table}\n\n\\caption{Imposed error floors for each frequency used. The error floors were added in quadrature to the existing errors.}\n\\begin{tabular}{cc}\n\\cline{1-2}\nFrequency (MHz) & Imposed Error Floor (\\%) \\\\ \\hline\n150 \\rlap{\\phantom{$\\strut^b$}} & 6 \\\\ \n325 \\rlap{\\phantom{$\\strut^b$}} & 20 \\\\ \n610 \\rlap{\\phantom{$\\strut^b$}} & 10 \\\\ \n1400 \\rlap{\\phantom{$\\strut^b$}} & 3 \\\\ \n3000 \\rlap{\\phantom{$\\strut^b$}} & 3 \\\\ \n5000 \\rlap{\\phantom{$\\strut^b$}} & 10 \\\\ \n8400 \\rlap{\\phantom{$\\strut^b$}} & 20 \\\\ \\hline\n\\end{tabular}\n\\label{tab:Table_2}\n\\end{table}\n\n\n\\subsection{The Windhorst (2003) compendium}\nA summary table for the data compendium originally assembled by \\citet{Windhorst_2003} is shown in the Appendix. 
The data were assembled to provide a summary of the state of radio source counts at three frequencies as of 2003 \\citep[see e.g.][]{Windhorst_1985, Windhorst_1990, Windhorst_1993, Windhorst_2003}, and as an aid to predict the extremely faint source counts in the microjansky and nanojansky regime, which may be observed by the SKA \\citep{Hopkins_2000}. Despite its age, this compendium provides the majority of the 5\\,GHz and 8.4\\,GHz source counts, and still includes the faintest 1.4\\,GHz source counts. \n\n\n\n\\section{The integrated source count measurement and the radio EBL}\nThe source counts, Euclidean-normalised source counts and the contribution to the radio EBL at 3.0\\,GHz are displayed in Figure\\,\\ref{fig:fig2} as the upper, middle and lower panels, respectively. Focusing on the lower panel, which shows the differential contribution to the radio EBL, one sees two peaks at $\\sim$$10^{2}$\\,mJy and $\\sim$$10^{-2}$\\,mJy. These represent the contributions to the 3\\,GHz EBL dominated by AGN and SFG, respectively. Below we describe how we fit these data via simple spline fitting and derive our integrated EBL measurements for each frequency.\n\n\\subsection{Spline fitting the source counts}\nFor each frequency we fit a smooth spline of order 7, adopting a weight for each data point inversely proportional to its adopted $1\\sigma$ error. Hence points with larger error bars have a smaller weight than those with smaller error bars. For our anchor points we adopt an error of 10\\% and we also include a transposed bright source count data point from 1.4\\,GHz, modified to the target frequency assuming a spectral index of 0.4. Note that the inclusion of an error floor at each frequency ensures that the data are statistically consistent. We note that without the adoption of this error floor the spline behaviour becomes erratic, as the errors vary significantly between surveys. 
This suggests that unknown systematics are indeed at work at the level indicated in Table \\ref{tab:Table_2}. These are most likely caused by calibration issues between surveys, receivers and teams.\n\n\nSpline fitting alone serves as a useful tool for integrating the data over the range where data exist, and integrating over this range allows us to obtain a lower limit to the radio EBL. The splines can also be extrapolated; if the source counts have converged, this is reasonable, as the flux in the extrapolation is likely to be small (and Monte Carlo simulations provide robust errors). Hence for all data sets we see that the inclusion of the anchor point and the transposed 1.4\\,GHz point results in bright-end convergence in a manner which is consistent with our physical expectation. In all cases we can also identify a clear dip between the AGN and SFG populations. \n\n\\subsection{Determination of the EBL and radio temperature}\nThe total intensity of the radio EBL is determined by integrating the spline fit shown in Figure \\ref{fig:fig2} along the x axis. This yields a result in nW\\,m$^{-2}$\\,sr$^{-1}$, given by the formula:\n\\begin{equation}\nI=\\ln(10) \\times \\nu \\times 10^{-26}\\,\\frac{\\mathrm{W}}{\\mathrm{Jy}}\\times 10^9\\,\\frac{\\mathrm{nW}}{\\mathrm{W}} \\sum_i \\left(Sp_i(y)+2.0\\,Sp_i(x)-2.0 \\log(10)\\right) \\frac{\\mathrm{Jy}}{\\mathrm{sr}}\n\\end{equation}\nwhere $Sp$ denotes the spline function fit to the data. The spline does not have a closed-form integral, so the integral is evaluated as a sum, with the necessary unit conversions in front. The results are shown in Figure \\ref{fig:fig3}.\n\\begin{figure}\n\\vspace*{0.000cm}\n\\includegraphics[width=\\columnwidth]{ebl_zoomed.png}\n\\caption{A magnified view of the region of interest, highlighting the upper and lower bounds of the radio EBL measurements from ARCADE-2 \\citep{Fixsen_2011}. The \\citet{Vernstrom_2011} data are shown with the $1\\sigma$ errors as the data points and the range between their minimum\/maximum extrapolation as the solid lines. 
The lower limits are derived from the source counts down to the lowest intensity of the surveys, and are shown as the horizontal line at the bottom of the arrow. The convergent measurement, where the integration includes extrapolation of the spline outside the range of the data, is shown as the diamond at 3\\,GHz or $10^5$ microns. The error bar for this convergent measurement is too small to meaningfully display, but the $1\\sigma$ uncertainty is given in Table \\ref{tab:Table_3}.}\n\\label{fig:fig3}\n\\end{figure}\n\nThe intensity of the EBL given by Equation 2 can be converted to a more conventional brightness temperature via the equation below\n\\begin{equation}\nI=2 k_{B} T \\nu \/ \\lambda^2\n\\label{eqn:eqn2}\n\\end{equation}\n\nThe integrals of our limiting fluxes as well as of the extrapolations are shown in Table\\,\\ref{tab:Table_3}, noting that the extrapolation only converges at 3\\,GHz. Errors are determined by taking the 83rd and 17th percentile limits from the series of spline fits for the upper and lower 1$\\sigma$ errors shown in the table and figures, respectively.\n\nFigure\\,\\ref{fig:fig3} shows our measurement of lower limits (maroon diamond or arrows), compared to the previous study of source counts by \\citet{Vernstrom_2014} (cyan measurements) and to the direct measurements from ARCADE-2 \\citep{Fixsen_2011} (black arrows). In general our lower limits place more stringent constraints on the EBL and help to close the gap somewhat between the previous EBL range and the ARCADE-2 data. However, where we have a clear measurement at 3\\,GHz we see that our integrated source counts return an EBL value a factor of $\\approx$4 below the ARCADE-2 measurement. 
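The brightness-temperature conversion above can be sketched numerically, assuming the Rayleigh-Jeans form $I_\nu = 2k_{B}T\/\lambda^{2}$ with the integrated intensity $I \approx \nu I_\nu$; with this relation, the 3\,GHz intensity of $1.27\times10^{-4}$\,nW\,m$^{-2}$\,sr$^{-1}$ from Table\,\ref{tab:Table_3} reproduces the $\approx$15.3\,mK listed in Table 3B:

```python
K_B = 1.380649e-23   # Boltzmann constant, J/K
C = 2.998e8          # speed of light, m/s

def intensity_to_temperature(intensity_nw, nu_hz):
    """Convert an integrated EBL intensity in nW m^-2 sr^-1 at
    frequency nu_hz into a Rayleigh-Jeans brightness temperature
    in mK, via T = I * lambda^2 / (2 k_B nu)."""
    lam = C / nu_hz               # wavelength in m
    i_si = intensity_nw * 1e-9    # nW -> W
    t_kelvin = i_si * lam**2 / (2.0 * K_B * nu_hz)
    return t_kelvin * 1e3         # K -> mK
```

Running `intensity_to_temperature(1.27e-4, 3.0e9)` gives $\approx$15.3, matching the 3000\,MHz entry of Table 3B.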
\\citet{Seiffert_2011} argue that the discrepancy could be due to an underestimated Galactic foreground, an unaccounted-for contribution of background radio sources, or some combination of the two, though our results show that an undetected population of discrete background sources cannot contribute significantly to the discrepancy.\n\n\\subsection{Determination of Empirical AGN and SFG Estimates of the CRB}\n\n\n\nAt this point, and noting that the brighter AGN hump is bounded at all frequencies, we can determine the total AGN-dominated EBL without the need for models, as the integral of the spline fit from $10^5$\\,mJy to the crossover point between the AGN and SFG populations. For example, at 3\\,GHz the crossover point occurs at just below 1\\,mJy, see Figure \\ref{fig:fig2}. This crossover point was first identified at a similar flux level at 1.4 GHz by \\citet{Windhorst_1985}. We can then define the SFG-dominated EBL contribution as a lower limit by integrating from the crossover point to the faintest data point. The crossover point is determined by finding the flux where the slope between the two peaks changes sign from negative to positive as we move along the spline from bright to faint fluxes. We note that only at 3\\,GHz is our estimate of the contribution of the SFGs to the total EBL convergent, due to the noise-analysis data of \\citet{Vernstrom_2014}. In all other bands we report lower limits in Table\\,\\ref{tab:Table_3}. This analysis provides a model-independent estimate of the CRB for the two populations, though it is not possible to separate the two populations with source counts alone, as both span a wide range in flux. 
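The slope-sign test for the crossover point could be sketched as follows, illustrated on a synthetic two-humped curve rather than the actual spline fits (the hump positions and amplitudes below are purely illustrative):

```python
import numpy as np

def find_crossover(log_flux, ebl_contribution):
    """Return the log-flux of the dip between the two humps: the
    point where the slope changes sign from negative to positive
    when moving from bright to faint fluxes.
    Arrays are sorted from faint to bright."""
    d = np.diff(ebl_contribution)
    # Local minima: falling-then-rising as log_flux increases.
    dips = [i for i in range(1, len(d)) if d[i - 1] < 0 <= d[i]]
    return log_flux[dips[-1]] if dips else None

# Synthetic EBL-contribution curve: bright AGN hump near +2 and a
# fainter SFG hump near -2 (in log10 mJy), with a dip in between.
logS = np.linspace(-3.0, 3.0, 601)
curve = np.exp(-(logS - 2.0) ** 2) + 0.5 * np.exp(-(logS + 2.0) ** 2)
```

On this synthetic curve the dip is recovered between the two humps; on the real spline fits the same test yields the midpoints listed in Table\,\ref{tab:Table_3}.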
Thus we turn to model source counts in Section~\\ref{Section 4} to better understand the CRB.\n\n\\begin{table*}\n\\caption{Intensity of the integrated source counts across the data, and from\/to the estimated midpoint, used to derive lower limits for the AGN and SFG population EBL.}\n\\begin{center}\n\\label{tab:Table_3}\n\\begin{tabular}{ccccccc} \\hline\nFrequency & Lower & Integration & Integration & Integration & Integration & Estimated \\\\ \n& limit & to lower limit & to zero & to Midpoint (AGN) & From Midpoint (SFG) & Midpoint \\\\\n\\,MHz & $\\mu$Jy & nW m$^{-2}$ sr$^{-1}$ & nW m$^{-2}$ sr$^{-1}$ & nW m$^{-2}$ sr$^{-1}$ & nW m$^{-2}$ sr$^{-1}$ & $\\log_{10}($mJy$)$\\\\ \\hline\n150& 220 & $3.55^{+0.01}_{-0.01}\\times10^{-5}$ & N\/A & $2.83_{-0.01}^{+0.01} \\times 10^{-5}$ & $7.21_{-0.02}^{+0.02} \\times 10^{-6}$ & 0.64 \\\\ \n325& 242 & $4.32^{+0.06}_{-0.05}\\times10^{-5}$ & N\/A & $3.81_{-0.06}^{+0.06} \\times 10^{-5}$ & $5.13_{-0.07}^{+0.06} \\times 10^{-6}$ & 0.34 \\\\\n610& 56 & $4.42^{+0.06}_{-0.06}\\times10^{-5}$ & N\/A & $3.21_{-0.05}^{+0.06} \\times 10^{-5}$ & $1.21_{-0.01}^{+0.01} \\times 10^{-5}$ & 0.28 \\\\\n1400& 12.4 & $7.74^{+0.03}_{-0.03}\\times10^{-5}$ & N\/A & $5.22_{-0.02}^{+0.02} \\times 10^{-5}$ & $2.52_{-0.02}^{+0.02} \\times 10^{-5}$ & -0.02 \\\\ \n3000& 0.05$^\\dagger$ & $1.27^{+0.02}_{-0.02}\\times10^{-4}$ & $1.31_{-0.04}^{+0.04} \\times 10^{-4}$ & $6.74_{-0.02}^{+0.02} \\times 10^{-5}$ & $6.29_{-0.41}^{+0.43} \\times 10^{-5}$ & -0.11 \\\\ \n5000 & 18.19 & $1.04^{+0.03}_{-0.03}\\times10^{-4}$ & N\/A & $8.50_{-0.13}^{+0.12} \\times 10^{-5}$ & $1.84_{-0.12}^{+0.13} \\times 10^{-5}$ & -0.68 \\\\ \n8400 & 13.4 & $1.23^{+0.04}_{-0.05}\\times10^{-4}$ & N\/A & $1.03_{-0.05}^{+0.04} \\times 10^{-4}$ & $2.54_{-0.16}^{+0.15} \\times 10^{-5}$ & -0.74 \\\\ \\hline\n\\end{tabular}\n\n\\end{center}\n\n$^\\dagger$ The 3000\\,MHz faintest point is from the \\citet{Vernstrom_2014} $P(D)$ noise analysis. The faintest direct source count measurement at 3000\\,MHz is at 3.72\\,$\\mu$Jy. 
All other data sets have a faintest direct source count measurement. Column 3 contains the integral from the $10^5\\,\\mu$Jy point to the faintest data point, and column 5 is the integral of the AGN-dominated sources from the $10^5\\,\\mu$Jy point to the estimated midpoint value. The integration of the AGN-dominated peak of the radio EBL contribution is discussed in Section 4.1.\n\\end{table*}\n\n\n\\begin{table*}\n\\renewcommand\\thetable{3B}\n\\caption{The EBL values from Table \\ref{tab:Table_3} converted to a brightness temperature in mK, the units used in \\citet{Vernstrom_2011}.}\n\\begin{center}\n\n\\begin{tabular}{ccccc} \\hline\n Frequency & Integration & Integration & Integration & Integration \\\\ \n & to lower limit & to zero & to Midpoint (AGN) & From Midpoint (SFG) \\\\\\, MHz & mK & mK & mK & mK \\\\ \\hline\n150 & $3.42^{+0.01}_{-0.01}\\times10^{4}$ & N\/A & $2.73^{+0.01}_{-0.01}\\times10^{4}$ & $6.95^{+0.02}_{-0.02}\\times10^{3}$ \\\\ \n325 & $4.10^{+0.06}_{-0.05}\\times10^{3}$ & N\/A & $3.61^{+0.06}_{-0.06}\\times10^{3}$ & $4.86^{+0.01}_{-0.01}\\times10^{2}$ \\\\\n610 & $6.34^{+0.09}_{-0.09}\\times10^{2}$ & N\/A & $4.60^{+0.09}_{-0.07}\\times10^{2}$ & $1.74^{+0.09}_{-0.10}\\times10^{2}$ \\\\\n1400 & $91.8^{+0.04}_{-0.04}$ & N\/A & $62.0^{+0.24}_{-0.24}$ & $30.0^{+0.24}_{-0.24}$ \\\\ \n3000 & $15.3^{+0.24}_{-0.24}$ & $15.8^{+0.48}_{-0.48}$ & $8.14^{+0.02}_{-0.02}$ & $7.58^{+0.52}_{-0.49}$ \\\\ \n5000 & $2.71^{+0.08}_{-0.08}$ & N\/A & $2.21^{+0.03}_{-0.03}$ & $0.48^{+0.03}_{-0.03}$ \\\\ \n8400 & $0.675^{+0.02}_{-0.03}$ & N\/A & $0.566^{+0.02}_{-0.03}$ & $0.139^{+0.01}_{-0.01}$ \\\\ \\hline\n\\end{tabular}\n\n\\end{center}\n\n\\end{table*}\n\n\n\n\n\\section{Using SHARK to extract the total EBL}\n\\label{Section 4}\nTo obtain a more robust and physically motivated estimate for both the AGN and SFG contributions to the EBL, we need to incorporate a model. 
In the past this has typically been done by backward modelling local radio luminosity functions for SFGs and adopting some evolution to determine the predicted faint source counts. These are tweaked to match the flux range where data exist and then used to extrapolate to recover the flux beyond the survey limits. Here we adopt a slightly different strategy by using a forward semi-analytic model, built on N-body simulations, where we use the form (shape) of the predicted source counts to help integrate our SFG population.\n\n\nWe elect to use the {\\sc Shark} semi-analytic model of galaxy formation \\citep{Lagos_2018}, which is able to generate self-consistent predictions from ultraviolet to radio continuum emission due to star formation. {\\sc Shark} models a range of baryon physics that are thought to play an important role in the formation and evolution of galaxies, including: (i) the collapse and merging of dark matter (DM) halos; (ii) the accretion of gas onto halos, which is modulated by the DM accretion rate; (iii) the shock heating and radiative cooling of gas inside DM halos, leading to the formation of galactic discs via conservation of specific angular momentum of the cooling gas; (iv) star formation in galaxy discs; (v) stellar feedback from the evolving stellar populations; (vi) chemical enrichment of stars and gas; (vii) the growth via gas accretion and merging of supermassive black holes; (viii) heating by active galactic nuclei (AGN); (ix) photoionization of the intergalactic medium; (x) galaxy mergers driven by dynamical friction within common DM halos, which can trigger bursts of SF and the formation and\/or growth of spheroids; and (xi) collapse of globally unstable disks, which also leads to bursts of SF and the formation and\/or growth of bulges. 
For this paper, we use the default model presented in \\citet{Lagos_2018}.\n\n\\citet{Lagos_2019} and \\citet{Lagos_2020} presented the FUV-to-FIR multi-wavelength predictions of {\\sc Shark}, by combining the model with the generative SED model {\\sc ProSpect} \\citep{Robotham_2020}, and using a novel method to compute dust attenuation parameters based on the radiative transfer analysis of the EAGLE hydrodynamical simulations of \\citet{Trayford_2020}. The model was shown to produce excellent agreement with the observed number counts from the FUV to the FIR \\citep{Lagos_2019}, and the redshift distribution of FIR-selected sources \\citep{Lagos_2020}. Here, we use the SED model extension of Hansen et al. (in preparation), which models the radio continuum emission of {\\sc Shark} galaxies. In particular, we use the predictions based on the implementation of the empirical \\citet{Dale_2014} model, which assumes a FIR-radio correlation, and has as free parameters the ratio of free-free to synchrotron emission (set to $0.1$) and the frequency dependences of the free-free and synchrotron SED emission (set to $\\nu^{-0.1}$ and $\\nu^{-0.8}$, respectively); these are the values used by \\citet{Dale_2014}. To compare with our observed radio wavelength number counts, we use the lightcone introduced in \\citet{Lagos_2019}, Section 5, which has an area of $107$~deg$^2$, and includes all galaxies with a dummy magnitude $<32$ (computed assuming a stellar mass-to-light ratio of $1$) at $0\\le z\\le 8$. The method used to construct lightcones is described in \\citet{Chauhan_2019}. 
In this version of the SHARK SEDs, we do not include AGN radio emission, but see \\citet{Amarantidis_2019} for a comparison of the SHARK 1.4\\,GHz luminosity function of AGN with observations over a wide redshift range.\nThe use of the SHARK model lightcones and their derived source counts for the SFG population only, at all seven frequencies, allows us to obtain a model radio EBL contribution. However, while the models come reasonably close, they do not precisely fit the observed source counts, showing some small amplitude offsets. In order to better match the data and models, we derive a scaling factor which simply shifts the entire model source count fit up or down by a constant factor to fit the available data. This scaling factor is obtained from a chi-squared minimization between the relevant data points that sample the SFG peak and the model. The fit is weighted by the inverse error of each data point, i.e., we minimise $\\sum_i \\frac{(D_i-M_i)^2}{\\sigma_i}$, where $D_i$ represents the data point, $M_i$ the model and $\\sigma_i$ the $1\\sigma$ error on the data point. Here we need to be careful to only include data points dominated by star formation and not contaminated by significant AGN flux. Hence we only use data points with fluxes below the flux where the minimum between the two peaks occurs, down to the flux limit. The scaling we apply to the SHARK number counts is small, reaching at most $13\\%$ in either direction. SHARK is prone to cosmic variance effects, and we sample a simulated volume of 310\\,Mpc$^3$. The source fluxes are not prone to random error, but have systematic errors associated with the choice of model input parameters. Thus, the correction to match observed data can be done in either direction. Table~\\ref{tab:Table_4} shows the upper and lower flux limits used to determine the scaling factors at each frequency. 
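For a multiplicative shift of the model, the inverse-error-weighted minimisation above has a closed-form solution, sketched here with hypothetical arrays rather than the actual SHARK counts:

```python
import numpy as np

def best_scale_factor(data, model, err):
    """Amplitude a minimising sum_i (D_i - a*M_i)^2 / sigma_i,
    i.e. the inverse-error-weighted chi-squared quoted above.
    Setting the derivative with respect to a to zero gives
    a = sum(w*D*M) / sum(w*M^2) with w = 1/sigma."""
    w = 1.0 / err
    return np.sum(w * data * model) / np.sum(w * model ** 2)
```

When the data are an exact multiple of the model, the recovered factor equals that multiple regardless of the errors; for real data the factor stays close to unity, consistent with the $\le13\%$ scalings of Table~\ref{tab:Table_4}.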
We derive errors for this process by letting the data points used in this minimization vary within their allowed errors, which returns a different scale factor each time; we then take the best fit, upper, and lower limits from the median, $83^{rd}$, and $17^{th}$ percentiles of the resulting distribution. \n\n\\begin{table*}\n\n\\caption{Derived factors by which the SHARK model EBL curves were scaled to match the data. The best scale factor was determined to be the median of the distribution of scale factor values, while the upper and lower limits are the 83rd and 17th percentiles, respectively.}\n\\begin{tabular}{ccccc}\n\\cline{1-5}\nFrequency & Flux range & SHARK Model & Scale Factor & Scale Factor \\\\ (MHz) & ($\\log_{10}($mJy$)$) & Scaling Factor & Upper Limit & Lower Limit \\\\ \\hline\n150 \\rlap{\\phantom{$\\strut^b$}} & $<0.4$ & 0.906 & 0.912 & 0.900 \\\\ \n325 \\rlap{\\phantom{$\\strut^b$}} & $<0.4$ & 0.876 & 0.984 & 0.857 \\\\ \n610 \\rlap{\\phantom{$\\strut^b$}} & $<0.1$ & 0.943 & 0.951 & 0.934 \\\\ \n1400 \\rlap{\\phantom{$\\strut^b$}} & $<-1$ & 0.998 & 1.010 & 0.985 \\\\ \n3000 \\rlap{\\phantom{$\\strut^b$}} & $<-1$ & 0.974 & 0.987 & 0.960 \\\\ \n5000 \\rlap{\\phantom{$\\strut^b$}} & $<-1$ & 1.034 & 1.087 & 0.983 \\\\ \n8400 \\rlap{\\phantom{$\\strut^b$}} & $<-1$ & 1.136 & 1.208 & 1.044 \\\\ \\hline\n\\end{tabular}\n\\label{tab:Table_4}\n\\end{table*}\n\n\n\\subsection{Sub-dividing the EBL into the AGN and SFG contributions}\nUsing the data-normalised SHARK model predictions, we are now able to subtract the SFG population from the original source count data. In doing so we obtain a representation of the AGN source counts only, and hence can derive the separable contributions of the SFG and AGN to the total EBL. Note that although a model has been used, the AGN source counts are minimally dependent on the SFG model, because the part of the SFG model which overlaps with the AGN peak is generally Euclidean and hence predictable in behaviour. 
Figure\\,\\ref{fig:fig4} illustrates the process at 3\\,GHz. This shows the original combined source count data as green diamonds and the normalised SHARK prediction for the SFG as the magenta line. In order to generate a model value at the location of every data point, we fit the model with a spline and then subtract this value from the counts. What remains is the data associated with AGN (red data points), which we can now subtract from the original source counts to get the data for the SFG only (blue data points). As expected, the right peak of the radio EBL seen in Figure \\ref{fig:fig4} is almost entirely dominated by the AGN population, and the left peak is dominated by the much fainter SFG population. It is also reassuring to see that the blue points scatter around the SHARK model even over the flux ranges not used in deriving the scaling factors.\n\nWe obtain error measurements by 1000 samplings of the scale factors, assuming a Poisson distribution as represented by the values shown in Table \\ref{tab:Table_4}, and by also allowing all data points to vary independently according to their assigned errors. Hence for each iteration we obtain a randomly sampled measurement of the SFG contribution from the renormalised SHARK model counts alone, and the AGN contribution from the original data with the SFG prediction removed. Using 1000 different spline models, each with a different scale factor and with the data perturbed according to their errors assuming a normal distribution, we obtain a set of 1000 EBL measurements for both the AGN and SFG populations, as well as their summed total. 
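The Monte Carlo error propagation described above can be sketched as follows (hypothetical values; the real analysis perturbs the full binned counts and uses 1000 spline realisations):

```python
import numpy as np

rng = np.random.default_rng(0)

def mc_percentiles(data, sigma, sfg_model, n_iter=1000):
    """Monte Carlo error propagation: perturb each data point within its
    1-sigma error, subtract the (scaled) SFG model to isolate the AGN part,
    and summarise the realisations by their 17th/50th/83rd percentiles."""
    data, sigma, sfg_model = map(np.asarray, (data, sigma, sfg_model))
    agn = np.empty((n_iter, data.size))
    for i in range(n_iter):
        perturbed = data + rng.normal(0.0, sigma)
        agn[i] = perturbed - sfg_model       # AGN = total counts - SFG model
    lo, best, hi = np.percentile(agn, [17, 50, 83], axis=0)
    return best, lo, hi

# Hypothetical total counts, errors, and SFG-model values at two flux bins
best, lo, hi = mc_percentiles([10.0, 9.0], [0.1, 0.1], [4.0, 3.0])
```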
While MCMC fitting works well for the $\\alpha$ vs flux relationship, the complexity of the data and the lack of an analytical solution mean that MCMC analysis is not viable for fitting the EBL distribution shown in Figure \\ref{fig:fig4}.\n\nIn all 3 cases, we use the same method of taking the median, $83^{rd}$, and $17^{th}$ percentiles to obtain a measurement, upper, and lower limit, which are displayed in Figure \\ref{fig:fig5} with their associated errors (blue points represent the SFG, red points the AGN and green squares the total EBL). Also shown are the ARCADE-2 direct measurements of the total radio EBL (black line and arrows) and our previous total lower limits from spline fitting only (magenta line, arrows and diamonds). In these figures the previous data from \\citet{Vernstrom_2011} shown on Figure\\,\\ref{fig:fig3} are omitted for clarity. Figure\\,\\ref{fig:fig5} allows us to compare our model-dependent extrapolation of the total discrete radio EBL at all frequencies with our lower limits and with the convergent measurement at 3\\,GHz. At 3\\,GHz, the sum of the two populations agrees extremely well with the empirical measurement, to within $\\approx7\\%$. This is consistent with our derived error from our spline fit of $\\pm9\\%$. Given the different parameters required to generate the SHARK model and the large errors on the convergent source counts at this frequency, this consistency adds credence to our methodology that separates the discrete EBL into its two primary components: SFG and AGN. The radio EBL contributions of each population, their errors, and their combination are shown in Table~\\ref{tab:Table_5}.\n\n\\begin{figure*}\n\\includegraphics[width=\\textwidth]{3000_EBL_predictions.png}\n\\caption{A breakdown of the 3\\,GHz source counts into their respective populations. 
The 1.4\\,GHz anchor point and Euclidean extension are shown and used in the fit for the predicted AGN EBL, and the SHARK model is given a constraining point shown at $10^3$\\,mJy to impose Euclidean behavior. \n\\label{fig:fig4}}\n\\end{figure*}\n\n\\begin{figure*}\n\\includegraphics[width=\\textwidth]{model_ebl_zoomed.png}\n\\caption{The model-derived estimates of the source counts shown alongside the lower limit estimates and the convergent measurement at 3\\,GHz. The EBL is broken down into its two dominant components and the sum of the two model-derived components. A dashed line joining the original lower limits is displayed, and indicates that at all frequencies the lower limit measurements still contain contributions from both populations, and that at 3\\,GHz, their sum is consistent with the only convergent measurement.\n\\label{fig:fig5}}\n\\end{figure*}\n\n\\begin{table*}\n\\caption{Summary of integrated radio source count data at different frequencies. Values here are quoted with upper and lower error bars, and the original lower limits are re-stated for comparison to the model-derived values. 
All EBL values quoted are given in units of nW m$^{-2}$ sr$^{-1}$.}\n\\begin{tabular}{cccccc}\n\\hline\nFrequency (MHz) & SHARK Model EBL & AGN-Fit EBL & Total Sum & Original Lower Limit \\\\ \\hline\n150 \\rlap{\\phantom{$\\strut^b$}} & $3.02^{+0.06}_{-0.06}\\times10^{-5}$ & $2.68^{+0.02}_{-0.03}\\times10^{-5}$ & $5.71^{+0.06}_{-0.06}\\times10^{-5}$ & $3.53^{+0.03}_{-0.03}\\times10^{-5}$ \\\\ \n325 \\rlap{\\phantom{$\\strut^b$}} & $3.23^{+0.09}_{-0.09}\\times10^{-5}$ & $3.55^{+0.10}_{-0.10}\\times10^{-5}$ & $6.74^{+0.13}_{-0.14}\\times10^{-5}$ & $4.23^{+0.24}_{-0.21}\\times10^{-5}$ \\\\ \n610 \\rlap{\\phantom{$\\strut^b$}} & $4.03^{+0.09}_{-0.08}\\times10^{-5}$ & $2.98^{+0.04}_{-0.04}\\times10^{-5}$ & $7.01^{+0.09}_{-0.08}\\times10^{-5}$ & $4.34^{+0.04}_{-0.04}\\times10^{-5}$ \\\\ \n1400 \\rlap{\\phantom{$\\strut^b$}} & $5.39^{+0.13}_{-0.12}\\times10^{-5}$ & $5.37^{+0.07}_{-0.07}\\times10^{-5}$ & $1.08^{+0.01}_{-0.01}\\times10^{-4}$ & $7.3^{+0.06}_{-0.07}\\times10^{-5}$ \\\\ \n3000 \\rlap{\\phantom{$\\strut^b$}} & $7.07^{+0.17}_{-0.16}\\times10^{-5}$ & $6.51^{+0.13}_{-0.67}\\times10^{-5}$ & $1.36^{+0.02}_{-0.02}\\times10^{-4}$ & $1.32^{+0.05}_{-0.04}\\times10^{-4}$ \\\\ \n5000 \\rlap{\\phantom{$\\strut^b$}} & $8.21^{+0.42}_{-0.42}\\times10^{-5}$ & $1.05^{+0.16}_{-0.14}\\times10^{-4}$ & $1.86^{+0.21}_{-0.15}\\times10^{-4}$ & $1.03^{+0.04}_{-0.04}\\times10^{-4}$ \\\\ \n8400 \\rlap{\\phantom{$\\strut^b$}} & $1.11^{+0.08}_{-0.08}\\times10^{-4}$ & $1.03^{+2.33}_{-0.175}\\times10^{-4}$ & $2.19^{+2.29}_{-0.244}\\times10^{-4}$ & $1.23^{+0.989}_{-0.107}\\times10^{-4}$ \\\\ \\hline\n\\end{tabular}\n\\label{tab:Table_5}\n\\end{table*}\n\n\\begin{table*}\n\\renewcommand\\thetable{6B}\n\\caption{The EBL values from Table \\ref{tab:Table_5} converted to a brightness temperature in mK, the units used in \\citet{Vernstrom_2011}.}\n\\begin{center}\n\n\\begin{tabular}{cccccc}\n\\hline\nFrequency (MHz) & SHARK Model EBL & AGN-Fit EBL & Total Sum & Original Lower Limit \\\\ 
\\hline\n150 & $2.91^{+0.01}_{-0.01}\\times10^{4}$ & $2.58^{+0.02}_{-0.03}\\times10^{4}$ & $5.51^{+0.06}_{-0.06}\\times10^{4}$ & $3.42^{+0.01}_{-0.01}\\times10^{4}$ \\\\ \n325 & $3.06^{+0.09}_{-0.09}\\times10^{3}$ & $3.36^{+0.09}_{-0.09}\\times10^{3}$ & $6.39^{+0.12}_{-0.13}\\times10^{3}$ & $4.10^{+0.06}_{-0.05}\\times10^{3}$ \\\\\n610 & $5.78^{+0.13}_{-0.11}\\times10^{2}$ & $4.27^{+0.06}_{-0.06}\\times10^{2}$ & $1.01^{+0.01}_{-0.01}\\times10^{3}$ & $6.34^{+0.09}_{-0.09}\\times10^{2}$ \\\\\n1400 & $63.9^{+1.54}_{-1.42}$ & $63.7^{+0.83}_{-0.83}$ & $1.28^{+0.01}_{-0.01}\\times10^{2}$ & $91.8^{+0.04}_{-0.04}$ \\\\ \n3000 & $8.52^{+0.20}_{-0.19}$ & $7.85^{+0.16}_{-0.81}$ & $16.4^{+0.24}_{-0.24}$ & $15.3^{+0.24}_{-0.24}$ \\\\ \n5000 & $2.14^{+0.11}_{-0.11}$ & $2.73^{+0.42}_{-0.36}$ & $4.84^{+0.11}_{-0.11}$ & $2.71^{+0.08}_{-0.08}$ \\\\ \n8400 & $0.610^{+0.04}_{-0.04}$ & $0.566^{+1.28}_{-0.10}$ & $1.20^{+1.26}_{-0.13}$ & $0.675^{+0.02}_{-0.03}$ \\\\ \\hline\n\\end{tabular}\n\n\\end{center}\n\n\\end{table*}\n\n\n\n\\subsection{Comparison to previous source count based EBL measurements}\nWe compare our results to previous discrete radio EBL measurements such as \\citet{Vernstrom_2011}, \\citet{Gervasi_2008}, and \n\\citet{Hardcastle_2021}. Our measurements and lower limits lie within the associated errors of \\citet{Vernstrom_2011}, with the exception of 150\\,MHz. The availability of data from the decade since their work has allowed us to move the lower limits of the radio EBL upwards significantly at all frequencies other than 1.4\\,GHz\ndue to the increased availability of deep survey data at these less studied frequencies. For example, previous surveys at 150\\,MHz had only achieved a depth of $\\approx 106$\\,mJy, while data published in this decade now reaches a depth of 220\\,$\\mu$Jy. A different behavior is seen at 325\\,MHz where the newer data sets achieve roughly the same depth but are systematically above the data used in \\citet{Vernstrom_2011}. 
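The unit conversion behind Table 6B can be reproduced from the Rayleigh-Jeans relation $T_B = I_\\nu c^2 \/ (2 k_B \\nu^2)$; a minimal sketch with rounded constants:

```python
C = 2.998e8      # speed of light [m s^-1]
K_B = 1.381e-23  # Boltzmann constant [J K^-1]

def ebl_to_mK(nu_I_nu_nW, freq_hz):
    """Convert an integrated EBL intensity nu*I_nu [nW m^-2 sr^-1] at
    frequency nu [Hz] to a Rayleigh-Jeans brightness temperature [mK]:
    T_B = I_nu c^2 / (2 k_B nu^2), with I_nu = (nu*I_nu) / nu."""
    i_nu = nu_I_nu_nW * 1e-9 / freq_hz            # W m^-2 Hz^-1 sr^-1
    return i_nu * C**2 / (2.0 * K_B * freq_hz**2) * 1e3

# The 150 MHz total of 5.71e-5 nW m^-2 sr^-1 gives ~5.5e4 mK (cf. Table 6B)
t150 = ebl_to_mK(5.71e-5, 150e6)
```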
From the results obtained in Section 5.1 and shown in Figure \\ref{fig:fig5}, we expect the radio EBL to gradually increase with frequency. Where this is not shown by the data, it is likely because the data do not reach faint enough fluxes to become convergent or to have a well-defined lower limit; future surveys reaching fainter fluxes at all frequencies with better statistics should address this inconsistency. At 150\\,MHz, synchrotron self-absorption can become important in shaping the counts of compact sources with a high free electron density. See \\citet{Mancuso_2017} for a detailed discussion on how synchrotron self-absorption affects the number counts at 150\\,MHz.\n\n\\subsection{Comparison to direct EBL measurements}\nAt 3\\,GHz, even with convergent source counts, we still do not measure enough flux to achieve consistency with the ARCADE-2 measurements \\citep{Fixsen_2011}: the corresponding ARCADE-2 measurement lies a factor of $\\approx 4$ above the discrete measurement obtained from source counts. The same is true for our lower limits and model-derived measurements at the other frequencies, which also lie below the ARCADE-2 measurements. \n\nWith convergent models such as the ones presented here, and with ever deeper counts, the possibility that the whole of the ARCADE-2 temperature could come from galaxies alone becomes increasingly unlikely. There are still models for counts below $1\\, \\mu$Jy, such as those presented in \\citet{Hopkins_2000}, \\citet{Condon_2012} and \\citet{Vernstrom_2014}, that would be enough to account for the difference. 
However, there would be constraints on the clustering of such sources from radio experiments measuring the angular power spectrum \\citep[e.g.][]{Holder_2014, Offringa_2022}, which imply either faint sources that are very clustered or sources at high redshift.\n\nMany other possible contributors to the difference between the source counts and ARCADE-2 have been examined \\citep[see][for an overview]{Singal_2018}. One possibility is diffuse radio emission associated with large-scale structure, i.e., clusters and the cosmic web. From statistical studies it appears that this sort of diffuse low-level emission may account for a few mK at $1.4\\,$GHz \\citep{Vernstrom_2014_B,Vernstrom_2017,Vernstrom_2021}. \\citet{Krause_2021} examined the local bubble as a possible contributor to the radio background, but found that at most this would contribute at the percent level. There has also been a good deal of work looking at the possibility of synchrotron emission from dark matter decay and\/or annihilation \\citep[e.g.][]{Fornengo_2011,Fornengo_2014}, as well as other more exotic explanations \\citep[such as dense nuggets of quarks;][]{Lawson_2013}. There is also the possibility of a Galactic origin. Certain models that include a spherical halo component for the Galaxy claim to account for any excess \\citep{Subrahmanyan_2013}, though there are possible problems with such models \\citep[see e.g.][]{Singal_2015}. \n\nIt seems unlikely that the discrepancy is caused by just one source. At this point it seems more likely that many factors may contribute, such as faint clustered point sources, low-level emission from the cosmic web, the local bubble, possibly dark matter annihilation or fast radio bursts, as well as possible reasons to decrease the ARCADE-2 estimate, such as revised Galactic modelling. Some of these contributors are easier to investigate than others, but new estimates for many will be coming in the next few years. 
Radio surveys at a variety of frequencies from the likes of MeerKAT, LOFAR, ASKAP and others will continue to push the source counts deeper and also investigate the clustering properties of faint sources. These surveys, together with better estimates of the contributions from diffuse emission (aided by stacking and other statistical techniques), will also continue to place new constraints on dark matter models \\citep[e.g.][]{Regis_2021}. New deep polarisation surveys like POSSUM will provide many new probes of the Galaxy's magnetic field, particularly in the halo region. However, one important future measurement would be a new measure of the radio sky from experiments with absolute zero-level calibration. \n\n\\section{Predicting the SFG radio EBL from the cosmic star-formation history}\nIn this section we attempt to predict the radio continuum flux from the SFG population. We start with the Cosmic Star-Formation History (e.g., \\citealt{Madau_2014}; \\citealt{Driver_2018}) and what we know about the radio--star-formation correlation (\\citealt{van_der_Kruit_1971}; \\citealt{Condon_1992}; \\citealt{Davies_2017}; \\citealt{Delvecchio_2021}; \\citealt{Seymour_2009}). Star formation results in strong UV fluxes that ionise the surrounding gas. As massive (M$>8$M$_{\\odot}$) stars reach the ends of their lives, they generate significant quantities of charged particles through supernova shocks, which interact with the Galactic magnetic field. In addition, there is a free-free component that arises from the interaction of less relativistic particles with the interstellar medium (ISM). Hence there is a direct connection between the supernova rate and the radio flux. This leads to a well-known correlation between star formation, as evidenced by UV fluxes and optical emission lines, far-IR radiation from heated dust, and radio emission from synchrotron radiation around massive stars and supernovae plus general free-free emission from the ISM. 
Critical physical parameters are the temperature of the ISM and the electron density. \n\nThis optical--FIR--radio correlation arising from star formation has been known for some time, and over the past decade enormous amounts of work have been invested in constructing the cosmic star-formation history (CSFH) from the era of recombination to the present day. Here we adopt the EBL model described in \\citet{Koushan_2021}, in which the spectral energy distribution code {\\sc ProSpect} (\\citealt{Robotham_2020}) uses a CSFH to provide a prediction of the UV to radio EBL. The detailed modelling of the radio continuum aspect follows closely that described in \\citet{Marvil_2015} and is essentially built into the {\\sc ProSpect} code (see Thorne et al., submitted).\n\nFigure\\,\\ref{fig:fig6} shows the recent compendium of EBL data from a range of optical to infrared surveys compiled over recent years and described in \\citet{Driver_2016} and \\citet{Koushan_2021}. These compendia are based on galaxy number counts (analogous to radio source counts), where in most cases the contribution to the EBL is clearly bounded and the measurements and errors are known to an accuracy of 10--20\\%. Also shown on Figure\\,\\ref{fig:fig6} (as cyan data points) are the Planck data from \\citet{Odegard_2019} covering the mm regime. Our new data are added at the radio wavelengths as blue open squares showing the measured EBL contribution from the SFG. For completeness we also show the lower limits and measurements of the {\\it combined} SFG and AGN EBL. The ARCADE-2 measurements are also shown.\n\nThe model curve (solid purple line) shows our prediction for the \\citet{Madau_2014} compendium of the cosmic star-formation history using the default options for {\\sc ProSpect}. 
The model fits the UV to far-IR data extremely well, as noted in \\citet{Koushan_2021}, and very slightly over-predicts the \\citet{Odegard_2019} data; however, it under-predicts the SFG radio EBL quite significantly.\n\nThe {\\sc ProSpect} model uses the \\citet{Marvil_2015} templates for dust and radio emission, which ultimately trace back to the model laid down by \\citet{Condon_1992}. Within the {\\sc ProSpect} code we have a number of options. The two most relevant are to increase the temperature of the ionised gas, $T_e$, or to modify the fraction of emission which comes from free-free emission rather than synchrotron, {\\sc ff\\_frac}. In the default model $T_e=10,000$\\,K (the value proposed by \\citet{Condon_1992} based on constraints from the Milky Way). A much better fit can be found by either increasing $T_e$ to $4.5 \\times 10^4$\\,K (dashed purple line), or decreasing {\\sc ff\\_frac} to 0.05 (dotted purple line). We note that \\citet{Condon_1990} advocates for {\\sc ff\\_frac} to remain within the bounds of $0.05-0.2$. Both modifications to the {\\sc ProSpect} model provide satisfactory fits to the data, and although the exact predictions are slightly different, the errors at this stage are too large to distinguish between them. Hence either modification, or some combination thereof, can readily match the data.\n\n In coming years significant improvements in the measurement and modelling of the EBL are to be expected, as deeper source counts become available with the advent of the Square Kilometre Array, along with an improved understanding of the total spectral energy distribution as we assemble large samples of UV-radio SEDs for the normal galaxy population. 
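As a qualitative illustration of the role of {\\sc ff\\_frac}, consider a toy two-component radio SED; the spectral indices below are illustrative assumptions, the $T_e$ dependence of the free-free emissivity is omitted, and this is not the actual {\\sc ProSpect} implementation:

```python
def radio_sed(s_14, nu_ghz, ff_frac=0.1, alpha_syn=0.8, alpha_ff=0.1):
    """Toy two-component radio SED anchored at 1.4 GHz: a steep synchrotron
    power law plus a flatter free-free component, mixed by ff_frac.
    Spectral indices are illustrative; the T_e dependence of the free-free
    emissivity is omitted."""
    syn = (1.0 - ff_frac) * (nu_ghz / 1.4) ** (-alpha_syn)
    ff = ff_frac * (nu_ghz / 1.4) ** (-alpha_ff)
    return s_14 * (syn + ff)

# Lowering ff_frac boosts the steep synchrotron component and hence the
# low-frequency flux, qualitatively as described in the text
low_ff = radio_sed(1.0, 0.15, ff_frac=0.05)
high_ff = radio_sed(1.0, 0.15, ff_frac=0.2)
```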
In due course we might expect the errors in the radio EBL to be reduced to a few percent, at which point it should become possible to extract meaningful constraints on the model parameters.\n\n Finally, it is worth noting the significant gap between the ARCADE-2 data (black dots) and our results on Fig.~\\ref{fig:fig6}. While an upward adjustment of the model is possible by increasing the ISM temperature or free-free fraction further, the parameters would need to move out of the recommended bounds to very extreme and physically unlikely values. Adding an additional, as yet unknown, hidden population also becomes problematic, as such a population could not be regular SFG without very quickly violating the constraints at optical\/infrared wavelengths.\n\n\\begin{figure*}\n\\includegraphics[width=\\textwidth]{Prospect_model.png}\n\\caption{The EBL from optical to radio wavelengths showing our new radio EBL empirical lower limits (gold arrows), our extrapolated measurements (green), and the component arising from SFG only (blue squares). Also shown is the EBL model predicted using the {\\sc ProSpect} code, where the radio emission of the SFG population is predicted entirely from the Cosmic Star-Formation History \\citep{Madau_2014}. The initial (default) {\\sc ProSpect} model (purple solid line; see \\citealt{Robotham_2020}) under-predicts the SFG emission at radio wavelengths. To rectify this one can either increase the ISM temperature (from $10^4$\\,K to $4.5\\times10^4$\\,K; dashed purple line), or change the balance between the synchrotron and free-free emission (dotted purple line). \n\\label{fig:fig6}}\n\\end{figure*}\n\n\\begin{figure*}\n\\vspace*{0.000cm}\n\\includegraphics[width=\\textwidth]{model_ebl.png}\n\\caption{The complete EBL over the entire measured EM range, including our discrete radio source count measurements along with those from the literature (as indicated in the legend). 
Also shown are the CMB contribution, which dominates at most radio wavelengths, and the constraints on the total non-CMB radio EBL from ARCADE-2 (black dotted line and open circles). Our model-derived (extrapolated) data points in the radio region are shown as green squares (AGN and SFG), and blue circles (SFG). The solid purple line is the best-fitting (dotted) SFG-only prediction from Figure \\ref{fig:fig6}.}\n\\label{fig:figEBL}\n\\end{figure*}\n\n\\section{Summary} \nIn this paper, we measure the discrete radio EBL across 7 frequencies using source count data spanning nearly 50 years of study and 8 orders of magnitude in flux. We assembled our data into a massive compendium which we use to establish the counts at each frequency using the spectral index versus flux relationship detailed in \\citet[][and references therein]{Windhorst_2003}. We use the data to identify problematic data sets which sit apart from the rest, to identify individual data points which display incompleteness or non-Euclidean behavior, and to establish an anchor point to bring the radio EBL to convergence at the bright end beyond $\\approx 10^4\\,$mJy. Finally, in preparing the data, we add a conservative error floor estimate for each frequency to address systematic errors in the absolute flux density scale seen between different surveys. We integrate under a series of spline fits to establish lower limits, and obtain a rough estimate of the AGN contribution by integrating down to the crossover point seen in the radio EBL distribution between the AGN- and star-forming-galaxy-dominated populations. At 3\\,GHz, the data reach faint enough fluxes to be convergent at both ends, and we can establish a model-independent convergent measurement of the radio EBL and a separation of its components. 
To validate our estimates and allow for a convergent measurement at other frequencies, we introduce the SHARK model source counts consisting of only star-forming galaxies, which we use to extrapolate the radio EBL at the faint end of all frequencies and robustly break it down into its respective components dominated by AGN and star-forming galaxies. We compare the model-derived measurements to the data-derived estimates at all frequencies, with special attention at 3\\,GHz, where, because the data are convergent, we can compare the model source counts to the data and find a $\\approx7\\%$ difference between the two. Finally, we place our model-derived measurements and our empirical measurement at 3\\,GHz in context with previous studies of the discrete radio EBL, such as \\citet{Vernstrom_2011}, and with the EBL as a whole in Figure \\ref{fig:figEBL}. We still do not find enough flux in faint resolved sources at 3\\,GHz to account for the total flux received by the direct measurement experiment of ARCADE-2 \\citep{Fixsen_2011}, and do not have a good explanation as to why this is the case.\n\n\\section*{Acknowledgement}\n\nWe gratefully acknowledge funding from the ARC Future Fellowship (FT200100375) through the University of Western Australia (UWA), and from the NASA JWST Interdisciplinary Scientist grants NAG5-12460, NNX14AN10G and 80NSSC18K0200 from GSFC to Rogier Windhorst and Arizona State University (ASU), both of which provided funding for student\nwork on this project.\n\n\nClaudia Lagos has received funding from the ARC Centre of \nExcellence for All Sky Astrophysics in 3 Dimensions (ASTRO 3D), through project number CE170100013.\n\n\nWe are also grateful to Yjan Gordon from the University of Wisconsin, Madison for providing the tabulated early VLA Sky Survey source count data shown in \\citet{Gordon_2021}. 
This work was supported by resources provided by The Pawsey Supercomputing Centre with funding from the\nAustralian Government and the Government of Western Australia.\n\n\n\n\\section*{Data Availability}\nAll data have been compiled from the literature as indicated in Table \\ref{tab:inputdata} and Table \\ref{tab:Table_5}, and are also made available through a machine-readable table available from the MNRAS site.\n\n\\bibliographystyle{mnras}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} diff --git a/data_all_eng_slimpj/shuffled/split2/finalzzhgcv b/data_all_eng_slimpj/shuffled/split2/finalzzhgcv new file mode 100644 index 0000000000000000000000000000000000000000..eba9c1842d825548f720e18253d0514c7f59094e --- /dev/null +++ b/data_all_eng_slimpj/shuffled/split2/finalzzhgcv @@ -0,0 +1,5 @@ +{"text":"\\section{Introduction}\n\nSickle cell disease (SCD) affects nearly 100,000 people in the US\\footnote{\\url{https:\/\/www.hematology.org\/Patients\/Anemia\/Sickle-Cell.aspx}} and is an inherited red blood cell disorder. Common complications of SCD include acute pain, organ failure, and early death \\cite{mohammed2019machine}. Acute pain arises in patients when blood vessels are obstructed by sickle-shaped red blood cells, restricting the flow of oxygen, a phenomenon called vaso-occlusive crisis. Further, pain is the leading cause of hospitalizations and emergency department admissions for patients with SCD. The numerous health care visits lead to a massive amount of electronic health record (EHR) data, which can be leveraged to investigate the relationships between SCD and pain. Since SCD is associated with several complications, it is important to identify clinical notes with signs of pain from those without pain. It is equally important to gauge changes in pain for proper treatment.\n\nOwing to their noisy nature, clinical notes are challenging to analyze. 
In this study, we propose techniques employing natural language processing, text mining and machine learning to predict \\textbf{pain relevance} and \\textbf{pain change} from SCD clinical notes. We build two kinds of models: 1) A binary classification model for classifying clinical notes into \\textit{pain relevant} or \\textit{pain irrelevant}; and 2) A multiclass classification model for classifying the \\textit{pain relevant} clinical notes into i) \\textit{pain increase}, ii) \\textit{pain uncertain}, iii) \\textit{pain unchanged}, and iv) \\textit{pain decrease}. We experiment with Logistic Regression, Decision Trees, Random Forest, and Feed-Forward Neural Networks (FFNN) for both the binary and multiclass classification tasks. For the multiclass classification task, we conduct ordinal classification, as the task is to predict pain change levels ranging from \\textit{pain increase} to \\textit{pain decrease}. We evaluate the performance of our ordinal classification model using the graded evaluation metrics proposed in \\cite{gaur2019knowledge}.\n\n\\section{Related Work}\n\nThere is an increasing body of work assessing complications within SCD. Mohammed et al. \\cite{mohammed2019machine} developed an ML model to predict early-onset organ failure using physiological data of patients with SCD. They used five physiologic markers as features to build a model using a random forest classifier, achieving the best mean accuracy in predicting organ failure within six hours before the incident. Jonassaint et al. \\cite{jonassaint2015usability} developed a mobile app that records signals such as clinical symptoms, pain intensity, location and perceived severity to actively monitor pain in patients with SCD. Yang et al. 
\\cite{yang2018improving} employed ML techniques to predict pain from objective vital signs, shedding light on how objective measures could be used for predicting pain.\n\nPast work on predicting pain or other comorbidities of SCD has thus relied on features such as physiological data to assess pain for a patient with SCD. In this study, we employ purely textual data to assess the prevalence of pain in patients and whether pain increases, decreases or stays constant. \n\nThere have been studies on clinical text analysis for other classification tasks. Wang et al. \\cite{wang2019clinical} conducted smoking status and proximal femur fracture classification using the i2b2 2006 dataset. Chodey et al. \\cite{chodey2016clinical} used ML techniques for named entity recognition and normalization tasks. Elhadad et al. \\cite{elhadad2015semeval} conducted clinical disorder identification using named entity recognition and template slot filling on the ShARe corpus \\cite{pradhan2014evaluating}. Similarly, clinical text can be used for predicting the prevalence and degree of pain in sickle cell patients, as it has a rich set of indicators for pain.\n\n\n\\section{Data Collection}\n\nOur dataset consists of 424 clinical notes of 40 patients collected by Duke University Medical Center over two years (2017--2019). The clinical notes were jointly annotated by two co-author domain experts. Two rounds of annotation were conducted on the dataset. In the first round, the clinical notes were annotated as \\textit{relevant to pain} or \\textit{irrelevant to pain}. In the second round, the \\textit{relevant to pain} clinical notes were annotated to reflect \\textbf{pain change}. Figure-1 shows the size of our dataset based on \\textbf{pain relevance} and \\textbf{pain change}. As shown, our dataset is mainly composed of \\textit{pain relevant} clinical notes. 
Among the \\textit{pain relevant} clinical notes, those labeled \\textit{pain decrease} for the \\textbf{pain change} class outnumber the rest. Sample \\textit{pain relevant} and \\textit{pain irrelevant} notes are shown in Table-I.\\\n\n\n\n\n\\begin{table}[htbp]\n\n\\caption{Sample Clinical Notes}\n\\begin{center}\n\n\\begin{tabular}{p{2.0cm} | p{5.5cm} }\n\n\\hline\n\\cline{2-2} \n\n\\textbf{\\textbf{Pain Relevance}}& \\textbf{\\textbf{Sample Clinical Note}} \\\\\n\n\\hline\nYES & Patient pain increased from 8\/10 to 9\/10 in chest. \\\\\n\n\\hline\nNO & Discharge home\n\\vspace*{-\\baselineskip}\n\n\n\n\n\n\\end{tabular}\n\\label{tab1}\n\\end{center}\n\\vspace{-4mm}\n\\end{table}\n\n\n\n\nOur dataset is highly imbalanced, particularly among the \\textbf{pain relevance} classes. There are significantly more instances of clinical notes labeled \\textit{pain relevant} than \\textit{pain irrelevant}. To address this imbalance in our dataset, we employed the Synthetic Minority Over-sampling Technique (SMOTE) \\cite{chawla2002smote} for both classification tasks.\n\n\n\\begin{figure}[htbp]\n\\centerline{\\includegraphics[width=70mm]{pain_relevance_change.png}}\n\n\\caption{Statistics of dataset for \\textbf{Pain Relevance} and \\textbf{Pain Change} classes}\n\\label{fig}\n\\end{figure}\n\n\n\nWe preprocessed our dataset by removing stop words and punctuation, and by performing lemmatization.\n\n\\section{Methods}\n\nThe clinical notes are labeled by co-author domain experts based on their \\textbf{pain relevance} and \\textbf{pain change} indicators. The \\textbf{pain change} labels use a scale akin to the Likert scale, from severe to mild. Our pipeline (Figure-2) consists of data collection, data preprocessing, linguistic\/topical analysis, feature extraction, feature selection, model creation, and evaluation. We use linguistic and topical features to build our models. 
While linguistic analysis is used to extract salient features, topical features are used to mine latent features. We performed two sets of experiments: 1) Binary Classification for \\textbf{pain relevance} classification, and 2) Multiclass Classification for \\textbf{pain change} classification.\n\n\n\\begin{figure}[htbp]\n\\centerline{\\includegraphics[width=75mm]{pipeline.png}}\n\n\\caption{Sickle Cell Disease Pain Classification Pipeline}\n\\label{fig}\n\\end{figure}\n\n\n\n\\subsection{Linguistic Analysis}\n\nTo infer salient features in our dataset, we performed linguistic analysis. We generated n-grams for \\textit{pain-relevant} and \\textit{pain-irrelevant} clinical notes and clinical notes labeled \\textit{pain increase}, \\textit{pain uncertain}, \\textit{pain unchanged}, or \\textit{pain decrease}. In our n-grams analysis, we observe there are unigrams and bigrams that are common to different classes (e.g., common to \\textit{pain relevant} and \\textit{pain irrelevant}). Similarly, there are unigrams and bigrams that are exclusive to a given class. 
Table-II shows the top 10 unigrams selected using \\( \\chi^2 \\) feature selection for our dataset based on the classes of interest.\n\n\n\n\\begin{table}[htbp]\n\n\\caption{Top 10 Unigrams}\n\\begin{center}\n\n\\begin{tabular}{p{2.15cm} |p{2cm}| p{2cm} }\n\n\\hline\n\\cline{2-3} \n\n\\textbf{\\textbf{Pain Relevant\n(Exclusive)}}& \\textbf{\\textbf{Pain Irrelevant\n(Exclusive)}}& \\textbf{\\textit{Pain Relevant AND Pain Irrelevant}} \\\\\n\n\n\n\\hline\nemar, intervention, increase, dose, expressions, chest, regimen, alteration, toradol, medication & home, wheelchair, chc, fatigue, bedside, parent, discharge, warm, relief, mother & pain, pca, plan, develop, control, altered, patient, level, comfort, manage \n\n\n\n\n\n\\end{tabular}\n\\label{tab1}\n\\end{center}\n\\vspace{-4mm}\n\\end{table}\n\n\\subsection{Topical Analysis}\nWhile n-gram analysis uncovers explicit language features in the clinical notes, it is equally important to uncover the hidden features characterizing the topical distribution. We adopt Latent Dirichlet Allocation (LDA) \\cite{blei2003latent} to unravel these latent features. We train an LDA model using our entire corpus. \n\nTo determine the optimal number of topics for a given class of clinical notes (e.g., \\textit{pain relevant} notes), we computed coherence scores \\cite{stevens2012exploring}. The higher the coherence score for a given number of topics, the more interpretable the topics are (see Figure-3). We set the number of words characterizing a given topic to eight. These are the words with the highest scores in the topic distribution. We found the human-interpretable optimal number of topics for each of the classes of clinical notes in our dataset to be two. That is, each class of clinical notes is modeled as a mixture of two topics. Table-III shows words for the two topics for \\textit{pain relevant} and \\textit{pain irrelevant} clinical notes. 
As can be seen in the table, \\textit{pain relevant} notes can be interpreted as mainly covering the topic of pain control, while \\textit{pain irrelevant} notes primarily cover the topic of home care. Similarly, Table-IV shows the distribution of words for the topics for each of the \\textbf{pain change} classes (underscored words are exclusive to the corresponding class for Topic-1). Further, \\textit{pain} appears in each of the topics for the \\textbf{pain change} classes and, as a result, is not discriminative. While a common word such as \\textit{pain} in the topic distribution can be considered a stop word and is not helpful for \\textbf{pain change} classification, we did not remove it since \\textit{pain} helps with the interpretation of a given topic regardless of other topics.\n\n\n\\begin{figure}[htbp]\n\\centerline{\\includegraphics[width=46mm, height=2.5cm]{coherence_final.png}}\n\n\\caption{Coherence Scores vs Number of Topics}\n\\label{fig}\n\n\\vspace{-1.5em}\n\\end{figure}\n\n\n\\begin{table}[htbp]\n\\vspace{1 mm}\n\\caption{Topic distribution based on pain relevance}\n\\begin{center}\n\\vspace{-4 mm}\n\n\n\\begin{tabular}{p{1.25cm} | p{3.2cm} | p{3.2cm}}\n\n\\hline\n\\cline{2-3} \n\n\n\\textbf{\\textbf{Pain Relevance}}& \\textbf{Most Prevalent Words in Topic-1} & \\textbf{Most Prevalent Words in Topic-2} \\\\\n\n\\hline\nYES & progress, pain, improve, decrease, knowledge, control\n& patient, pain, medication, knowledge, goal, stat\n\\\\\n\n\\hline\nNO & note, admission, discharge, patient, home, ability\n& pain, goal, admission, outcome, relief, continue\n\n\n\\vspace*{-\\baselineskip}\n\n\n\\end{tabular}\n\\label{tab1}\n\\end{center}\n\\vspace{-4mm}\n\\end{table}\n\n\n\n\n\\begin{table}[htbp]\n\n\\caption{Topic distribution based on pain change}\n\\begin{center}\n\n\\vspace{-4 mm}\n\n\\begin{tabular}{p{1.25cm} | p{3.2cm} | p{3.2cm}}\n\n\\hline\n\\cline{2-2} \n\n\\textbf{\\textbf{Pain Change}}& \\textbf{\\textbf{Most Prevalent Words in Topic-1}} & 
\\textbf{\\textbf{Most Prevalent Words in Topic-2}} \\\\\n\n\\hline\nPain increase & pain, progress, \\textbf{\\underline{medication}}, \\textbf{\\underline{management}}, patient, \\textbf{\\underline{schedule}}, pca, \\textbf{\\underline{intervention}} & pain, patient, give, goal, intervention, dose, button, plan\\\\\n\n\\hline\nPain uncertain & pain, patient, \\textbf{\\underline{goal}}, \\textbf{\\underline{continue}}, plan, \\textbf{\\underline{improve}}, decrease, develop & outcome, pain, problem, knowledge, regimen, deficit, carry, method\\\\\n\n\\hline\nPain unchanged & pain, progress, \\textbf{\\underline{level}}, \\textbf{\\underline{control}}, develop, plan, regimen, pca & patient, pain, remain, well, demand, plan, level, manage\\\\\n\n\\hline\nPain decrease & pain, progress, patient, decrease, plan, regimen, \\textbf{\\underline{satisfy}}, \\textbf{\\underline{alter}} & pain, patient, improve, satisfy, control, decrease, manage, ability\\\\\n\n\\hline\n\n\\end{tabular}\n\\label{tab1}\n\\end{center}\n\\vspace{-4mm}\n\\end{table}\n\n\n\n\n\n\\subsection{Classification}\n\nThe results of the linguistic and topical analyses are used as features in building the ML models. Our classification task consists of two sub-classification tasks: 1) \\textbf{pain relevance} classification and 2) \\textbf{pain change} classification, each with its own set of features. The \\textbf{pain relevance} classifier classifies clinical notes into \\textit{pain-relevant} and \\textit{pain-irrelevant}. The \\textbf{pain change} classifier is used to classify the \\textit{pain-relevant} clinical notes into 1) \\textit{pain increase}, 2) \\textit{pain uncertain}, 3) \\textit{pain unchanged}, and 4) \\textit{pain decrease}. We trained and evaluated various ML models for each classification task. We used a combination of different linguistic and topical features to train our models. 
Since linguistic and topical features are generated by independent underlying techniques, which makes them largely orthogonal, a concatenation operation is used to combine their representations. We split our dataset into 80\\% training and 20\\% testing sets and built logistic regression, decision tree, random forest, and feed-forward neural network (FFNN) models for both classification tasks. Table-V shows the results of the \\textbf{pain relevance} classifier, while Table-VI shows the \\textbf{pain change} classification results. For the ordinal classification, we considered the following order in the severity of pain change, from high to low: \\textit{pain increase}, \\textit{pain uncertain}, \\textit{pain unchanged}, \\textit{pain decrease}.\n\n\\begin{table}[ht]\n \\caption{Pain Relevance Classification}\n \\centering\n \\scalebox{0.85}{\n \\begin{tabular}{|c|c|c|c|c|}\n \\hline\n \\textbf{Model} & \\textbf{Feature} & \\textbf{Precision} & \\textbf{Recall} & \\textbf{F-measure}\\\\\n \\hline\n \\multirow{3}{*} {Logistic Regression} & Linguistic & 0.94 & 0.93 & 0.94\\\\\n & Topical & 0.98 & 0.86 & 0.91\\\\\n & Linguistic + Topical & 0.95 & 0.95 & 0.95\\\\\n \\hline\n \\multirow{3}{*} \\textbf{Decision Trees} & Linguistic & 0.95 & 0.95 & 0.95\\\\\n & Topical & 0.98 & 0.98 & 0.98\\\\\n & \\textbf{Linguistic + Topical} & \\textbf{0.98} & \\textbf{0.98} & \\textbf{0.98}\\\\\n \\hline\n \\multirow{3}{*} {Random Forest} & Linguistic & 0.90 & 0.95 & 0.92\\\\\n & Topical & 0.95 & 0.98 & 0.98\\\\\n & Linguistic + Topical & 0.90 & 0.95 & 0.93\\\\\n \\hline\n \\multirow{3}{*} {FFNN} & Linguistic & 0.94 & 0.94 & 0.94\\\\\n & Topical & 0.98 & 
0.98 & 0.98\\\\\n & Linguistic + Topical & 0.96 & 0.96 & 0.94\\\\\n \\hline\n \\end{tabular}\n }\n \n \\label{multirow_table}\n\\end{table}\n\n\n\n\n\n\\begin{table}[ht]\n \\caption{Pain Change Classification}\n \\centering\n \\scalebox{0.85}{\n \\begin{tabular}{|c|c|c|c|c|}\n \\hline\n \\textbf{Model} & \\textbf{Feature} & \\textbf{Precision} & \\textbf{Recall} & \\textbf{F-measure}\\\\\n \\hline\n \\multirow{3}{*} {Logistic Regression} & Linguistic & 0.75 & 0.56 & 0.63\\\\\n & Topical & 0.50 & 0.55 & 0.52\\\\\n & Linguistic + Topical & 0.76 & 0.58 & 0.66\\\\\n \\hline\n \\multirow{3}{*} \\textbf{Decision Trees} & Linguistic & 0.76 & 0.59 & 0.67\\\\\n & Topical & 0.73 & 0.65 & 0.68\\\\\n & \\textbf{Linguistic + Topical} & \\textbf{0.74} & \\textbf{0.68} & \\textbf{0.70}\\\\\n \\hline\n \\multirow{3}{*} {Random Forest} & Linguistic & 0.74 & 0.49 & 0.59\\\\\n & Topical & 0.94 & 0.52 & 0.66\\\\\n & Linguistic + Topical & 0.81 & 0.46 & 0.59\\\\\n \\hline\n \\multirow{3}{*} {FFNN} & Linguistic & 0.71 & 0.59 & 0.65\\\\\n & Topical & 0.73 & 0.65 & 0.68\\\\\n & Linguistic + Topical & 0.83 & 0.51 & 0.63\\\\\n \\hline\n \\end{tabular}}\n \n \\label{multirow_table}\n\n\\end{table}\n \n\\section{Discussion}\n\nFor \\textbf{pain relevance} classification, the four models have similar performance. For \\textbf{pain change} classification, however, we see a significant difference in performance across the various combinations of features and models. Decision trees with combined linguistic and topical features achieve the best F-measure. While the random forest and FFNN each offer better precision than decision trees, they suffer on recall, and therefore on F-measure. Further, most models perform better when trained on topical features than on linguistic features alone. A combination of topical and linguistic features usually offers the best model performance. Thus, the latent features obtained using LDA enable an ML model to perform better. 
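The feature-combination step described above can be sketched as follows: linguistic n-gram counts over a fixed vocabulary are concatenated with topical proportions (e.g., from LDA) into one input vector, and the 80/20 split is drawn at random. The vocabulary, topic proportions, and helper names here are illustrative assumptions, not the authors' actual implementation.

```python
import random

def ngram_counts(note, vocab):
    """Unigram/bigram counts of a note over a fixed n-gram vocabulary."""
    toks = note.lower().split()
    grams = toks + [" ".join(p) for p in zip(toks, toks[1:])]
    return [grams.count(v) for v in vocab]

def featurize(note, vocab, topic_dist):
    # The two views come from independent techniques, so they are simply
    # concatenated into a single feature vector.
    return ngram_counts(note, vocab) + list(topic_dist)

def train_test_split(rows, test_frac=0.2, seed=0):
    """Shuffle and split rows into train/test portions."""
    rows = list(rows)
    random.Random(seed).shuffle(rows)
    cut = int(len(rows) * (1 - test_frac))
    return rows[:cut], rows[cut:]

vocab = ["pain", "pca", "pain control"]  # hypothetical top-ranked n-grams
vec = featurize("pain control via pca", vocab, [0.7, 0.3])
train, test = train_test_split(range(10))
```

Vectors of this form can then be fed to any off-the-shelf classifier (logistic regression, decision trees, random forests, or an FFNN).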
\n\nEvaluation of the multiclass classification task is conducted using the technique of Gaur et al. \\cite{gaur2019knowledge}, where a model is penalized based on how much it deviates from the true label for an instance. Formally, the count of true positives is incremented when the true label and predicted label of an instance are the same. The count of false positives is incremented by the gap between the predicted and true labels when the predicted label is higher than the true label, and the count of false negatives is incremented by that gap when the predicted label is lower than the true label. Precision and recall are then computed following the implementations defined in ML libraries\\footnote{\\url{https:\/\/bit.ly\/3a5Fibb}} using the counts of true positives, false positives, and false negatives. Finally, F-measure is defined as the harmonic mean of precision and recall.\n\nWhile we achieved scores on the order of 0.9 for \\textbf{pain relevance} classification, the best we achieved for \\textbf{pain change} classification was 0.7. This is because there is more disparity in linguistic and topical features between \\textit{pain relevant} and \\textit{pain irrelevant} notes than there is among the four \\textbf{pain change} classes. Since the price of false negatives is higher than that of false positives in a clinical setting, we favor decision trees with n-grams and topics used as features, as they achieve the best recall and F-measure, although they trail other models on precision. Thus, identification of \\textit{pain relevant} notes with a 0.98 F-measure, followed by a 0.70 F-measure on determining \\textit{pain change}, is impressive. 
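A minimal sketch of this penalized counting scheme, assuming the four pain-change classes are encoded as integers ordered by severity (e.g., pain decrease = 0 up to pain increase = 3); the exact library implementation referenced in the footnote may differ in detail.

```python
def ordinal_prf(y_true, y_pred):
    """Penalized precision/recall/F-measure for ordinal labels.

    Exact matches count as true positives; over-predictions add their
    distance from the true label to the false-positive count, and
    under-predictions add it to the false-negative count.
    """
    tp = fp = fn = 0
    for t, p in zip(y_true, y_pred):
        if p == t:
            tp += 1
        elif p > t:
            fp += p - t
        else:
            fn += t - p
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1

# Two exact hits, one over-prediction by 1, one under-prediction by 1.
p, r, f = ordinal_prf([3, 2, 1, 0], [3, 3, 0, 0])
```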
We believe our model can be used by MPs for SCD-induced pain mitigation.\n\n\\vspace{8mm}\n\n\\section{Conclusion and Future Work}\n\nIn this study, we conducted a series of analyses and experiments that leverage natural language processing and ML to predict \\textbf{pain relevance} and \\textbf{pain change} from clinical text. Specifically, we used a combination of linguistic and topical features to build different models and compared their performance. Results show decision trees, followed by a feed-forward neural network, to be the most promising models.\n\nIn future work, we plan to collect additional clinical notes and use unsupervised and deep learning techniques for predicting pain. Further, we look forward to fusing different modalities of sickle cell data for better modeling of pain and other physiological manifestations of SCD.\n\n\n\n\n\n\n\n\n\n\\vspace{12pt}\n\n\n\n\n\\bibliographystyle{IEEEtran} \n","meta":{"redpajama_set_name":"RedPajamaArXiv"}}\n+{"text":"\n\n\\section{Appendix}\n\n\nTable~\\ref{tab:hyperparameters} lists the training hyperparameters and\nruntime hyperparameters used by \\texttt{PenumbraMixture}.\nTable~\\ref{tab:cross-eval-top-1}, Table~\\ref{tab:cross-eval-top-5},\nand Table~\\ref{tab:cross-eval-winner-acc}\nprovide top-1 action, top-5 action, and winner accuracies, respectively,\nbetween each headset in the neural network.\nFigure~\\ref{fig:game-length-distribution}\nshows game length distributions for each headset.\n\nThe synopsis features were hand-designed.\nMany of them are natural given the rules of chess.\nSome of them are near duplicates of each other.\nTable~\\ref{tab:feature-description-1} and Table~\\ref{tab:feature-description-2}\njointly provide brief descriptions of each synopsis feature plane.\nThese tables also include\nsaliency estimates averaged over five runs.\nThe penultimate column orders the synopsis features by\ntheir per-bit saliency based on action gradients, and\nthe final column reports the average 
difference\nof the policy head accuracies\nwhen the model was retrained without each feature.\n\n\\begin{table}[h]\n\\caption{Hyperparameters used by \\texttt{PenumbraMixture}}\n\\label{tab:hyperparameters}\n\\vskip 0.15in\n\\begin{center}\n\\begin{small}\n\\begin{tabular}{llr}\n Symbol & Parameter & Value \\\\\n \\hline\n \n $b$ & Batch size & $256$ \\\\\n $c$ & Exploration constant & $2$ \\\\\n $d_{\\text{sense}}$ & Search depth for sense actions & $6$ \\\\\n $d_{\\text{move}}$ & Search depth for move actions & $12$ \\\\\n $F$ & \\# of binary synopsis features & $8\\stimes8\\stimes104$ \\\\\n $k$ & Rejection sampling persistence & $512$ \\\\\n $\\ell$ & Limited state set size & $128$ \\\\\n $m$ & Bandit mixing constant & $1$ \\\\\n $n_{\\text{particles}}$ & \\# of samples to track & $4096$ \\\\\n $n_{\\text{vl}}$ & Virtual loss & $1$ \\\\\n $n_{\\text{batches}}$ & Total minibatches of training & $650000$ \\\\\n $n_{\\text{width}}$ & Network width; \\# features per layer & $128$ \\\\\n $n_{\\text{depth}}$ & Network depth; \\# residual blocks & $10$ \\\\\n $z$ & Depth increase threshold & $\\infty$ \\\\\n $\\kappa$ & Caution & $0$ \\\\\n $\\phi$ & Paranoia & $0$ \\\\\n $\\epsilon$ & Learning rate & $0.0005$ \\\\\n\\end{tabular}\n\\end{small}\n\\end{center}\n\\vskip -0.1in\n\\end{table}\n\n\n\\subsection{2019 NeurIPS competition}\n\n\\texttt{Penumbra}{} was originally created to compete in the\n2019 reconnaissance blind chess competition\nhosted by the Conference on Neural Information Processing Systems (NeurIPS).\nHowever, it performed very poorly in that competition,\nwinning fewer games than the random bot.\n\nThe program and underlying algorithm presented in this paper\nare largely the same as the originals.\nThe main differences are that some\nhyperparameters were adjusted,\nthe neural network was retrained with more data,\nand a key bug in the playout{} code was fixed.\nInstead of choosing actions according to the policy from the neural network,\nthe playout{} 
code erroneously always selected the last legal action.\nGiving the program a \\texttt{break} made a huge difference.\n\n\n\\subsection{Comparison with Kriegspiel}\n\nA comparison between RBC and Kriegspiel chess\n\\citep{ciancarini2009mcts, parker2010paranoia, richards2012reasoning}\nmay be worthwhile. Kriegspiel chess \nalso introduces uncertainty about the opposing pieces\nbut lacks an explicit sensing mechanism.\nInstead, information is gathered solely from\ncaptures, check notifications, and illegal move attempts.\nIn Kriegspiel, illegal moves are rejected and the player is allowed to choose a new\nmove with their increased information about the board state,\nwhich entangles the positional and informational aspects of the game.\nIn contrast, sensing in RBC gives players direct control\nover the amount and character of the information they possess.\n\nAnother significant difference comes from the mechanics related to check.\nCapturing pieces and putting the opposing king into check\nhave benefits in both games:\ncapturing pieces leads to a material advantage,\nand check often precedes checkmate.\nIn Kriegspiel, however, both capturing and giving check\nalso provide the opponent with information.\nIn RBC, while capturing does give the opponent information,\nputting their king into check does not,\nwhich makes sneak attacks more viable.\n\n\n\\subsection{Games played}\n\nThe games that were played in order to produce Table~\\ref{tab:baseline-bots}, Table~\\ref{tab:caution-and-paranoia}, and Table~\\ref{tab:exploration}\nare available for download from \\url{https:\/\/github.com\/w-hat\/penumbra}.\n\n\n\\begin{table*}[ht]\n\\caption{Top-1 action accuracy across headsets.}\n\\label{tab:cross-eval-top-1}\n\\vspace{0.1in}\n\\begin{center}\n\\rowcolors{2}{gray!10}{white}\n\\setlength{\\tabcolsep}{3.2pt}\n\\begin{tabular}{ccccccccccccccc}\n \\crossevaltableheader\n \\texttt{All}{} & \\bf 33.6 & 41.5 & 40.5 & 32.3 & 43.7 & 44.1 & 41.5 & 20.3 & 31.9 & 37.1 & 36.6 & 21.9 
& 41.4 & 3.8 \\\\\n \\texttt{Top}{} & 30.5 & \\bf 43.1 & 44.5 & 32.3 & 43.9 & 44.4 & 40.3 & 19.4 & 31.6 & 22.4 & 26.0 & 18.5 & 15.6 & 3.3 \\\\\n \\texttt{StrangeFish}{} & 27.3 & 40.5 & \\bf 45.9 & 30.3 & 36.9 & 38.8 & 33.2 & 17.9 & 27.7 & 21.0 & 23.6 & 17.5 & 14.6 & 3.3 \\\\\n \\texttt{LaSalle}{} & 26.0 & 34.4 & 34.6 & \\bf 36.4 & 34.1 & 34.5 & 33.2 & 17.2 & 24.6 & 22.6 & 26.1 & 15.5 & 11.2 & 3.4 \\\\\n \\texttt{Dyn.Entropy} & 27.9 & 37.9 & 35.2 & 27.5 & \\bf 50.0 & 40.1 & 37.4 & 18.0 & 32.6 & 18.3 & 22.6 & 16.9 & 13.8 & 3.3 \\\\\n \\texttt{Oracle}{} & 28.8 & 38.4 & 36.3 & 29.1 & 41.2 & \\bf 49.3 & 35.1 & 17.2 & 29.5 & 19.9 & 26.6 & 17.0 & 11.9 & 3.4 \\\\\n \\focuswbernar{} & 28.8 & 38.1 & 33.9 & 30.7 & 42.9 & 38.6 & \\bf 45.2 & 17.9 & 29.3 & 18.4 & 23.1 & 16.7 & 11.6 & 3.3 \\\\\n \\texttt{Marmot}{} & 22.4 & 29.4 & 28.6 & 24.0 & 31.4 & 29.3 & 29.7 & \\bf 24.6 & 25.0 & 16.4 & 15.5 & 15.4 & 11.1 & 3.4 \\\\\n \\texttt{Genetic}{} & 24.0 & 32.5 & 30.3 & 24.1 & 39.8 & 35.0 & 32.3 & 16.9 & \\bf 40.4 & 15.1 & 15.7 & 15.1 & 7.8 & 3.4 \\\\\n \\texttt{Zugzwang}{} & 20.5 & 21.8 & 23.2 & 20.5 & 20.4 & 23.5 & 17.3 & 11.7 & 12.3 & \\bf 47.0 & 34.6 & 14.0 & 10.9 & 3.2 \\\\\n \\texttt{Trout}{} & 22.8 & 25.1 & 26.0 & 24.0 & 23.0 & 27.8 & 21.3 & 12.5 & 14.4 & 36.1 & \\bf 41.8 & 14.9 & 14.7 & 3.7 \\\\\n \\texttt{Human}{} & 23.8 & 30.1 & 30.6 & 24.9 & 31.4 & 30.1 & 28.4 & 16.8 & 24.1 & 24.7 & 24.5 & \\bf 24.9 & 12.5 & 3.3 \\\\\n \\texttt{Attacker}{} & 10.6 & 11.7 & 11.0 & 9.8 & 11.4 & 11.6 & 12.4 & 8.6 & 8.8 & 9.2 & 9.0 & 6.7 & \\bf 45.1 & 4.4 \\\\\n \\texttt{Random}{} & 14.0 & 16.7 & 16.2 & 14.0 & 16.5 & 17.8 & 16.8 & 9.4 & 11.4 & 15.7 & 16.5 & 10.4 & 10.4 & \\bf 4.5 \\\\\n\\end{tabular}\n\\end{center}\n\\end{table*}\n\n\n\\begin{table*}[ht]\n\\caption{Top-5 action accuracy across headsets.}\n\\label{tab:cross-eval-top-5}\n\\vspace{0.1in}\n\\begin{center}\n\\rowcolors{2}{gray!10}{white}\n\\setlength{\\tabcolsep}{3pt}\n\\begin{tabular}{ccccccccccccccc}\n \\crossevaltableheader\n 
\\texttt{All}{} & \\bf 62.0 & 72.3 & 71.0 & 66.6 & 74.9 & 75.8 & 72.5 & 51.3 & 65.3 & 65.5 & 61.8 & 48.1 & 59.1 & 18.2 \\\\\n \\texttt{Top}{} & 58.8 & \\bf 73.4 & 73.4 & 66.4 & 75.5 & 76.4 & 72.0 & 50.1 & 64.7 & 48.5 & 52.2 & 45.4 & 35.3 & 16.3 \\\\\n \\texttt{StrangeFish}{} & 56.8 & 72.2 & \\bf 74.3 & 64.7 & 72.6 & 74.2 & 67.5 & 48.5 & 62.1 & 46.5 & 50.1 & 43.8 & 41.4 & 15.7 \\\\\n \\texttt{LaSalle}{} & 56.8 & 68.8 & 68.4 & \\bf 68.2 & 69.8 & 70.6 & 68.3 & 48.7 & 58.9 & 51.5 & 56.6 & 43.6 & 30.8 & 16.8 \\\\\n \\texttt{Dyn.Entropy} & 55.5 & 69.1 & 66.9 & 60.9 & \\bf 76.7 & 72.3 & 69.6 & 47.2 & 64.4 & 38.4 & 47.2 & 42.7 & 41.4 & 16.6 \\\\\n \\texttt{Oracle}{} & 56.7 & 70.4 & 68.9 & 61.9 & 74.1 & \\bf 77.4 & 69.1 & 45.6 & 63.4 & 43.9 & 50.6 & 43.1 & 34.4 & 16.4 \\\\\n \\focuswbernar{} & 57.0 & 70.0 & 67.3 & 64.7 & 74.0 & 71.3 & \\bf 73.6 & 49.3 & 63.3 & 43.3 & 50.2 & 44.0 & 29.8 & 17.0 \\\\\n \\texttt{Marmot}{} & 53.4 & 65.0 & 64.0 & 58.7 & 68.6 & 66.1 & 65.2 & \\bf 55.4 & 59.3 & 43.0 & 46.2 & 42.6 & 32.2 & 16.2 \\\\\n \\texttt{Genetic}{} & 52.6 & 65.8 & 64.2 & 58.2 & 71.3 & 69.7 & 65.8 & 45.4 & \\bf 70.5 & 37.4 & 41.6 & 40.9 & 27.0 & 16.2 \\\\\n \\texttt{Zugzwang}{} & 42.8 & 45.0 & 45.4 & 46.0 & 42.1 & 47.5 & 42.4 & 32.5 & 32.8 & \\bf 71.6 & 57.9 & 35.3 & 29.0 & 15.4 \\\\\n \\texttt{Trout}{} & 49.3 & 54.5 & 54.4 & 53.9 & 53.7 & 57.2 & 52.5 & 38.4 & 42.2 & 62.9 & \\bf 63.4 & 38.7 & 40.9 & 18.1 \\\\\n \\texttt{Human}{} & 53.5 & 62.9 & 62.4 & 58.3 & 64.7 & 64.6 & 62.5 & 45.9 & 55.7 & 53.7 & 53.4 & \\bf 51.9 & 33.4 & 16.3 \\\\\n \\texttt{Attacker}{} & 35.2 & 39.5 & 38.8 & 34.7 & 40.8 & 40.2 & 39.7 & 31.1 & 33.8 & 33.0 & 32.4 & 28.4 & \\bf 61.9 & 20.0 \\\\\n \\texttt{Random}{} & 39.7 & 45.6 & 44.5 & 41.2 & 46.4 & 47.9 & 46.0 & 31.8 & 37.7 & 41.3 & 42.1 & 31.5 & 30.3 & \\bf 20.7 \\\\\n\\end{tabular}\n\\end{center}\n\\end{table*}\n\n\\begin{table*}[ht]\n\\caption{Winner accuracy across 
headsets.}\n\\label{tab:cross-eval-winner-acc}\n\\vspace{0.1in}\n\\begin{center}\n\\rowcolors{2}{gray!10}{white}\n\\setlength{\\tabcolsep}{3pt}\n\\begin{tabular}{ccccccccccccccc}\n \\crossevaltableheader\n \\texttt{All}{} & \\bf 74.8 & 73.4 & 76.6 & 68.4 & 74.6 & 76.3 & 67.8 & 79.3 & 74.1 & 76.7 & 76.7 & 72.1 & 79.6 & 91.3 \\\\\n \\texttt{Top}{} & 63.1 & \\bf 82.4 & \\bf 86.7 & 68.0 & 76.0 & 80.6 & 66.3 & 65.6 & 73.2 & 49.1 & 55.1 & 52.7 & 47.7 & 29.9 \\\\\n \\texttt{StrangeFish}{} & 64.1 & 82.1 & 86.6 & 67.5 & 76.2 & 80.6 & 65.2 & 65.8 & 72.7 & 50.1 & 55.4 & 55.8 & 60.9 & 37.3 \\\\\n \\texttt{LaSalle}{} & 69.9 & 77.3 & 80.7 & \\bf 69.7 & 76.0 & 78.8 & 67.1 & 73.5 & 75.9 & 65.1 & 68.3 & 62.4 & 71.9 & 60.7 \\\\\n \\texttt{Dyn.Entropy} & 67.8 & 80.4 & 85.0 & 69.4 & \\bf 78.9 & 80.8 & 67.0 & 71.7 & 75.6 & 58.5 & 61.1 & 61.5 & 65.0 & 47.3 \\\\\n \\texttt{Oracle}{} & 66.0 & 82.0 & 86.4 & 69.4 & 77.3 & \\bf 81.4 & 66.9 & 69.2 & 75.3 & 53.5 & 58.2 & 57.9 & 59.5 & 39.7 \\\\\n \\focuswbernar{} & 71.3 & 78.5 & 82.5 & 69.3 & 77.2 & 79.5 & \\bf 68.3 & 75.3 & 76.8 & 64.0 & 65.8 & 66.9 & 72.6 & 68.9 \\\\\n \\texttt{Marmot}{} & 70.6 & 67.7 & 70.1 & 64.5 & 70.9 & 72.3 & 65.7 & \\bf 80.8 & 72.9 & 73.9 & 73.4 & 69.2 & 77.0 & 72.6 \\\\\n \\texttt{Genetic}{} & 67.1 & 80.0 & 84.1 & 68.6 & 77.1 & 80.3 & 67.1 & 71.5 & \\bf 77.9 & 56.5 & 60.8 & 60.9 & 62.1 & 44.3 \\\\\n \\texttt{Zugzwang}{} & 68.3 & 54.6 & 54.0 & 59.7 & 61.4 & 60.0 & 61.7 & 76.1 & 64.4 & \\bf 80.1 & 78.8 & 72.5 & 78.5 & 88.9 \\\\\n \\texttt{Trout}{} & 69.7 & 58.6 & 59.1 & 60.9 & 63.6 & 64.4 & 63.4 & 77.9 & 69.0 & 77.9 & \\bf 79.8 & 72.4 & 78.6 & 87.3 \\\\\n \\texttt{Human}{} & 71.1 & 64.8 & 66.0 & 63.8 & 67.7 & 68.0 & 65.1 & 76.6 & 70.4 & 74.6 & 76.2 & \\bf 73.3 & 77.7 & 90.1 \\\\\n \\texttt{Attacker}{} & 63.7 & 47.1 & 46.1 & 55.6 & 53.8 & 50.9 & 56.5 & 71.4 & 59.5 & 75.0 & 73.4 & 71.5 & \\bf 80.1 & 92.7 \\\\\n \\texttt{Random}{} & 51.0 & 25.5 & 20.9 & 43.4 & 34.8 & 28.5 & 46.5 & 54.9 & 38.2 & 65.7 & 56.6 & 62.5 & 69.5 & 
\\bf 94.7 \\\\\n\\end{tabular}\n\\end{center}\n\\end{table*}\n\n\n\\begin{figure*}[ht]\n\\centering\n\\begin{minipage}{1\\linewidth}\n\\includegraphics[width=1\\linewidth]{images\/game-length-distributions-8.png}\n\\end{minipage}\n\\caption{\nThe historical game length distributions are shown for the\ndata used to train each of the headsets.\nOn average, the games from \\texttt{Attacker}{} were the shortest,\nand the games from \\texttt{StockyInference}{} were the longest.\n}\n\\label{fig:game-length-distribution}\n\\end{figure*}\n\n\n\n\\begin{table*}[ht]\n\\caption{%\nSynopsis feature descriptions, saliency estimates, and ablation study results.\n}\n\\label{tab:feature-description-1}\n\\vspace{0.1in}\n\\begin{center}\n\\begin{tabular}{clrrrrrr}\n \\featuretableheader\n 0 & East side (constant) & 3.24 & 0.86 & 0.80 & 0.91 & 34 & 0.27 \\\\\n 1 & West side (constant) & 3.12 & 0.82 & 0.84 & 0.81 & 43 & 0.00 \\\\\n 2 & South side (constant) & 3.24 & 0.86 & 0.78 & 0.94 & 33 & -0.08 \\\\\n 3 & North side (constant) & 3.22 & 0.82 & 0.89 & 0.75 & 44 & 0.27 \\\\\n 4 & Rank 1 (constant) & 3.15 & 0.80 & 0.81 & 0.76 & 50 & -0.03 \\\\\n 5 & Rank 8 (constant) & 3.12 & 0.82 & 0.85 & 0.61 & 42 & -0.07 \\\\\n 6 & A-file (constant) & 3.04 & 0.78 & 0.81 & 0.53 & 59 & 0.18 \\\\\n 7 & H-file (constant) & 3.08 & 0.80 & 0.83 & 0.58 & 51 & 0.15 \\\\\n 8 & Dark squares (constant) & 3.08 & 0.82 & 0.81 & 0.82 & 45 & 0.21 \\\\\n 9 & Light squares (constant) & 3.03 & 0.78 & 0.77 & 0.78 & 61 & 0.01 \\\\\n 10 & Stage (move or sense) & 7.80 & 3.14 & 2.82 & 3.45 & 0 & -0.19 \\\\\n 11 & Not own piece & 5.40 & 1.43 & 2.66 & 1.13 & 8 & -0.29 \\\\\n 12 & Own pawns & 4.16 & 1.14 & 1.07 & 1.73 & 14 & 0.01 \\\\\n 13 & Own knights & 3.68 & 0.93 & 0.91 & 1.63 & 22 & 0.09 \\\\\n 14 & Own bishops & 3.46 & 0.89 & 0.87 & 1.63 & 27 & 0.02 \\\\\n 15 & Own rooks & 3.67 & 0.94 & 0.93 & 1.12 & 21 & 0.03 \\\\\n 16 & Own queens & 3.28 & 0.87 & 0.85 & 2.24 & 32 & 0.06 \\\\\n 17 & Own king & 3.14 & 0.79 & 0.79 & 0.88 
& 52 & 0.10 \\\\\n 18 & Definitely not opposing pieces & 3.85 & 1.14 & 1.03 & 1.21 & 13 & -0.10 \\\\\n 19 & Definitely opposing pawns & 3.49 & 1.01 & 1.02 & 0.73 & 17 & -0.08 \\\\\n 20 & Definitely opposing knights & 3.30 & 0.93 & 0.93 & 0.61 & 23 & -0.05 \\\\\n 21 & Definitely opposing bishops & 3.21 & 0.88 & 0.88 & 0.59 & 29 & -0.02 \\\\\n 22 & Definitely opposing rooks & 3.04 & 0.81 & 0.82 & 0.38 & 47 & 0.02 \\\\\n 23 & Definitely opposing queens & 3.15 & 0.85 & 0.85 & 0.60 & 35 & -0.10 \\\\\n 24 & Definitely opposing king & 3.60 & 0.92 & 0.91 & 2.27 & 26 & 0.04 \\\\\n 25 & Possibly not opposing pieces & 5.22 & 1.54 & 1.34 & 1.56 & 5 & -0.04 \\\\\n 26 & Possibly opposing pawns & 3.50 & 0.92 & 0.92 & 0.93 & 24 & 0.06 \\\\\n 27 & Possibly opposing knights & 2.97 & 0.77 & 0.77 & 0.81 & 67 & 0.07 \\\\\n 28 & Possibly opposing bishops & 2.95 & 0.75 & 0.74 & 0.89 & 70 & 0.09 \\\\\n 29 & Possibly opposing rooks & 3.01 & 0.75 & 0.76 & 0.63 & 69 & -0.18 \\\\\n 30 & Possibly opposing queens & 3.05 & 0.78 & 0.77 & 1.05 & 57 & -0.07 \\\\\n 31 & Possibly opposing kings & 4.86 & 1.48 & 1.43 & 2.64 & 7 & -0.04 \\\\\n 32 & Last from & 2.77 & 0.72 & 0.72 & 0.83 & 76 & -0.11 \\\\\n 33 & Last to & 3.28 & 0.96 & 0.96 & 1.40 & 19 & 0.02 \\\\\n 34 & Last own capture & 3.10 & 0.83 & 0.83 & 1.17 & 40 & 0.07 \\\\\n 35 & Last opposing capture & 8.04 & 2.83 & 2.82 & 6.51 & 1 & -0.08 \\\\\n 36 & Definitely attackable & 2.72 & 0.70 & 0.62 & 0.78 & 84 & -0.06 \\\\\n 37 & Definitely attackable somehow & 2.73 & 0.71 & 0.65 & 0.78 & 80 & -0.02 \\\\\n 38 & Possibly attackable & 3.02 & 0.81 & 0.71 & 0.92 & 48 & 0.19 \\\\\n 39 & Definitely doubly attackable & 2.67 & 0.66 & 0.63 & 0.80 & 92 & -0.11 \\\\\n 40 & Definitely doubly attackable somehow & 2.66 & 0.69 & 0.67 & 0.80 & 88 & 0.14 \\\\\n 41 & Possibly doubly attackable & 2.71 & 0.75 & 0.73 & 0.83 & 71 & -0.26 \\\\\n 42 & Definitely attackable by pawns & 3.54 & 0.92 & 0.92 & 2.38 & 25 & 0.13 \\\\\n 43 & Possibly attackable by pawns & 3.11 & 
0.78 & 0.78 & 0.95 & 58 & -0.10 \\\\\n 44 & Definitely attackable by knights & 2.91 & 0.72 & 0.71 & 0.84 & 77 & 0.24 \\\\\n 45 & Definitely attackable by bishops & 2.60 & 0.64 & 0.61 & 0.80 & 95 & 0.15 \\\\\n 46 & Possibly attackable by bishops & 2.60 & 0.68 & 0.64 & 0.85 & 89 & -0.07 \\\\\n 47 & Definitely attackable by rooks & 2.63 & 0.65 & 0.64 & 0.75 & 93 & 0.07 \\\\\n 48 & Possibly attackable by rooks & 2.74 & 0.70 & 0.69 & 0.77 & 81 & 0.00 \\\\\n 49 & Possibly attackable without king & 2.72 & 0.70 & 0.63 & 0.79 & 82 & 0.19 \\\\\n 50 & Possibly attackable without pawns & 2.63 & 0.67 & 0.62 & 0.73 & 90 & 0.17 \\\\\n 51 & Definitely attackable by opponent & 3.25 & 0.87 & 0.91 & 0.77 & 31 & -0.03 \\\\\n\\end{tabular}\n\\end{center}\n\\end{table*}\n\n\\begin{table*}[ht]\n\\caption{%\nSynopsis feature descriptions, saliency estimates, and ablation study results (continued).\n}\n\\label{tab:feature-description-2}\n\\vspace{0.1in}\n\\begin{center}\n\\begin{tabular}{clrrrrrr}\n \\featuretableheader\n 52 & Possibly attackable by opponent & 3.15 & 0.84 & 0.90 & 0.81 & 37 & 0.06 \\\\\n 53 & Definitely doubly attackable by opp. & 2.56 & 0.65 & 0.66 & 0.56 & 94 & -0.01 \\\\\n 54 & Possibly doubly attackable by opp. & 2.67 & 0.71 & 0.73 & 0.66 & 79 & -0.13 \\\\\n 55 & Definitely attackable by opp. pawns & 3.10 & 0.87 & 0.87 & 1.69 & 30 & 0.12 \\\\\n 56 & Possibly attackable by opp. pawns & 2.84 & 0.77 & 0.77 & 1.10 & 62 & 0.30 \\\\\n 57 & Definitely attackable by opp. knights & 2.78 & 0.69 & 0.70 & 0.59 & 87 & 0.21 \\\\\n 58 & Possibly attackable by opp. knights & 2.14 & 0.52 & 0.51 & 0.55 & 102 & 0.08 \\\\\n 59 & Definitely attackable by opp. bishops & 2.66 & 0.67 & 0.67 & 0.63 & 91 & 0.09 \\\\\n 60 & Possibly attackable by opp. bishops & 2.16 & 0.53 & 0.52 & 0.56 & 101 & -0.09 \\\\\n 61 & Definitely attackable by opp. rooks & 2.77 & 0.70 & 0.72 & 0.48 & 83 & -0.19 \\\\\n 62 & Possibly attackable by opp. 
rooks & 2.10 & 0.51 & 0.53 & 0.47 & 103 & 0.20 \\\\\n 63 & Possibly attackable by opp. w\/o king & 2.55 & 0.64 & 0.63 & 0.64 & 96 & -0.17 \\\\\n 64 & Possibly attackable by opp. w\/o pawns & 2.45 & 0.62 & 0.61 & 0.62 & 97 & -0.13 \\\\\n 65 & Possibly safe opposing king & 6.04 & 2.06 & 2.01 & 3.38 & 2 & 0.07 \\\\\n 66 & Squares the opponent may move to & 2.40 & 0.60 & 0.60 & 0.60 & 98 & 0.01 \\\\\n 67 & Possible castle state for opponent & 3.09 & 0.79 & 0.79 & 0.72 & 53 & 0.00 \\\\\n 68 & Known squares & 4.94 & 1.52 & 1.67 & 1.45 & 6 & 0.13 \\\\\n 69 & Own king's king-neighbors & 3.10 & 0.78 & 0.77 & 0.93 & 56 & 0.14 \\\\\n 70 & Own king's knight-neighbors & 2.82 & 0.71 & 0.70 & 0.91 & 78 & 0.31 \\\\\n 71 & Definitely opp. knights near king & 3.09 & 0.79 & 0.79 & 1.64 & 54 & 0.13 \\\\\n 72 & Possibly opp. knights near king & 5.13 & 1.72 & 1.72 & 2.77 & 4 & -0.01 \\\\\n 73 & Own king's bishop-neighbors & 2.74 & 0.69 & 0.68 & 0.86 & 85 & -0.10 \\\\\n 74 & Definitely opp. bishops near king & 3.04 & 0.79 & 0.79 & 0.89 & 55 & 0.23 \\\\\n 75 & Possibly opp. bishops near king & 5.23 & 1.75 & 1.75 & 2.41 & 3 & -0.11 \\\\\n 76 & Own king's rook-neighbors & 2.76 & 0.69 & 0.68 & 0.83 & 86 & -0.13 \\\\\n 77 & Definitely opp. rooks near king & 3.10 & 0.81 & 0.81 & 0.87 & 49 & 0.31 \\\\\n 78 & Possibly opp. rooks near king & 4.45 & 1.40 & 1.40 & 1.55 & 10 & 0.05 \\\\\n 79 & All own pieces & 5.26 & 1.36 & 1.09 & 2.47 & 11 & -0.01 \\\\\n 80 & Definitely empty squares & 3.69 & 0.96 & 1.05 & 0.84 & 20 & -0.13 \\\\\n 81 & May castle eventually & 3.11 & 0.81 & 0.81 & 1.26 & 46 & 0.24 \\\\\n 82 & Possibly may castle & 3.05 & 0.77 & 0.77 & 0.63 & 68 & 0.05 \\\\\n 83 & Definitely may castle & 3.04 & 0.77 & 0.77 & 0.87 & 66 & 0.12 \\\\\n 84 & Own queens' rook-neighbors & 2.20 & 0.54 & 0.53 & 0.63 & 100 & 0.04 \\\\\n 85 & Own queens' bishop-neighbors & 2.33 & 0.57 & 0.57 & 0.67 & 99 & 0.06 \\\\\n 86 & Previous definitely not opp. 
pieces & 3.82 & 0.88 & 0.87 & 0.89 & 28 & -0.30 \\\\\n 87 & Previous definitely opp. pawns & 4.16 & 1.16 & 1.18 & 0.88 & 12 & 0.14 \\\\\n 88 & Previous definitely opp. knights & 3.02 & 0.77 & 0.77 & 0.72 & 63 & 0.12 \\\\\n 89 & Previous definitely opp. bishops & 2.92 & 0.73 & 0.73 & 0.70 & 75 & -0.02 \\\\\n 90 & Previous definitely opp. rooks & 3.60 & 1.01 & 1.02 & 0.56 & 16 & 0.05 \\\\\n 91 & Previous definitely opp. queens & 3.93 & 1.11 & 1.11 & 1.00 & 15 & 0.22 \\\\\n 92 & Previous definitely opp. king & 3.33 & 0.83 & 0.82 & 1.58 & 38 & -0.04 \\\\\n 93 & Previous possibly not opp. pieces & 4.40 & 1.43 & 1.21 & 1.47 & 9 & -0.05 \\\\\n 94 & Previous possibly opp. pawns & 3.27 & 0.83 & 0.82 & 0.92 & 39 & 0.21 \\\\\n 95 & Previous possibly opp. knights & 3.04 & 0.78 & 0.78 & 0.73 & 60 & 0.22 \\\\\n 96 & Previous possibly opp. bishops & 3.10 & 0.74 & 0.74 & 0.78 & 73 & 0.16 \\\\\n 97 & Previous possibly opp. rooks & 2.94 & 0.77 & 0.79 & 0.45 & 64 & 0.10 \\\\\n 98 & Previous possibly opp. queens & 3.02 & 0.74 & 0.74 & 0.81 & 74 & -0.07 \\\\\n 99 & Previous possibly opp. 
king & 3.14 & 0.83 & 0.82 & 1.15 & 41 & 0.04 \\\\\n 100 & Previous last from & 2.85 & 0.75 & 0.74 & 0.85 & 72 & -0.04 \\\\\n 101 & Previous last to & 3.36 & 1.00 & 1.00 & 1.50 & 18 & 0.21 \\\\\n 102 & Previous own capture & 3.05 & 0.84 & 0.84 & 1.13 & 36 & -0.09 \\\\\n 103 & Previous opposing capture & 2.93 & 0.77 & 0.77 & 1.05 & 65 & 0.05 \\\\\n\\end{tabular}\n\\end{center}\n\\end{table*}\n\n\n\n\\section*{Checklist}\n\n\n\\begin{enumerate}\n\n\\item For all authors...\n\\begin{enumerate}\n \\item Do the main claims made in the abstract and introduction accurately reflect the paper's contributions and scope?\n \\answerYes{The synopsis abstraction makes DSMCP tractable for large games, and the experiments show that the stochastic bandit algorithm contributes significantly to the algorithm's performance.}\n \\item Did you describe the limitations of your work?\n \\answerYes{The paper is upfront that DSMCP does not explicitly seek a Nash equilibrium.}\n \\item Did you discuss any potential negative societal impacts of your work?\n \\answerYes{The broader impact section mentions how algorithms that are capable of planning in large domains involving uncertainty (such as the real world) could have harmful applications.}\n \\item Have you read the ethics review guidelines and ensured that your paper conforms to them?\n \\answerYes{}\n\\end{enumerate}\n\n\\item If you are including theoretical results...\n\\begin{enumerate}\n \\item Did you state the full set of assumptions of all theoretical results?\n \\answerNA{The main paper does not contain theoretical results, though the appendix may include statements about the asymptotic behavior of the stochastic bandit algorithm and of DSMCP.}\n\t\\item Did you include complete proofs of all theoretical results?\n \\answerNA{The main paper does not contain theoretical results, though the appendix may include proofs of the theoretical statements in it.}\n\\end{enumerate}\n\n\\item If you ran 
experiments...\n\\begin{enumerate}\n \\item Did you include the code, data, and instructions needed to reproduce the main experimental results (either in the supplemental material or as a URL)?\n \\answerYes{While it is not released yet, the games produced in the experiments will be shared and the algorithms will be contributed to an open source suite of algorithms.}\n \\item Did you specify all the training details (e.g., data splits, hyperparameters, how they were chosen)?\n \\answerNo{For example, the random seeds and precise makeup of the \\texttt{Top}{} dataset are not provided, but the paper makes a best effort to provide as many details as possible, which are likely to be sufficient to reproduce the results.}\n\t\\item Did you report error bars (e.g., with respect to the random seed after running experiments multiple times)?\n \\answerYes{The experiments contain 95\\% confidence intervals for the provided Elo ratings.}\n\t\\item Did you include the total amount of compute and the type of resources used (e.g., type of GPUs, internal cluster, or cloud provider)?\n \\answerYes{Section~\\ref{sec:training-procedure} mentions the four RTX 2080 Ti GPUs used.}\n\\end{enumerate}\n\n\\item If you are using existing assets (e.g., code, data, models) or curating\/releasing new assets...\n\\begin{enumerate}\n \\item If your work uses existing assets, did you cite the creators?\n \\answerYes{Section~\\ref{sec:reconnaissance-blind-chess} cites the creators of reconnaissance blind chess, and Section~\\ref{sec:experiments} cites the authors of the baseline programs used in the experiments.}\n \\item Did you mention the license of the assets?\n \\answerNo{It looks like Johns Hopkins University has not provided an explicit license for the board game or the games played on the website.}\n \\item Did you include any new assets either in the supplemental material or as a URL?\n \\answerYes{Additional assets will be provided in the supplementary material.}\n \\item Did you 
discuss whether and how consent was obtained from people whose data you're using\/curating?\n \\answerNo{The paper does not explicitly discuss receiving consent for participating in the RBC contest or training on RBC games, which seems okay.}\n \\item Did you discuss whether the data you are using\/curating contains personally identifiable information or offensive content?\n \\answerNo{Whether or not a person or program can be identified by what moves are chosen in a game of RBC is an interesting research question, but it is tangential and seems acceptable to omit in this paper.}\n\\end{enumerate}\n\n\\item If you used crowdsourcing or conducted research with human subjects...\n\\begin{enumerate}\n \\item Did you include the full text of instructions given to participants and screenshots, if applicable?\n \\answerNA{}\n \\item Did you describe any potential participant risks, with links to Institutional Review Board (IRB) approvals, if applicable?\n \\answerNA{}\n \\item Did you include the estimated hourly wage paid to participants and the total amount spent on participant compensation?\n \\answerNA{}\n\\end{enumerate}\n\n\\end{enumerate}\n\n\n\n\n\n\n\\section{Introduction}\n\\label{sec:introduction}\n\nChoosing a Nash equilibrium strategy\nis rational when the opponent is able to\nidentify and exploit suboptimal behavior \\citep{bowling2001rational}.\nHowever, not all opponents are so responsive,\nand computing a Nash equilibrium is intractable for many games.\nThis paper presents deep synoptic Monte Carlo planning (DSMCP), an\nalgorithm for large imperfect information games that\nseeks a best-response strategy rather than a Nash equilibrium strategy.\n\nWhen opponents use fixed policies,\nan imperfect information game may be viewed as a\npartially observable Markov decision process (POMDP)\nwith the opponents as part of the environment.\nDSMCP treats playing against specific opponents as related offline\nreinforcement learning (RL) problems and exploits 
predictability.\nImportantly,\nthe structure of having opponents with imperfect information is preserved\nin order to account for their uncertainty.\n\n\\begin{figure}[t]\n\\begin{minipage}{.49\\linewidth}\n\\input{first_figure.tex}\n\\end{minipage}%\n\\begin{minipage}{0.02\\linewidth}\n\\,\n\\end{minipage}%\n\\begin{minipage}{.49\\linewidth}\n\\begin{center}\n\\begin{minipage}{0.5\\columnwidth}\n \\centering\n \\includegraphics[width=0.96\\linewidth]{images\/game-120174-replay.png}\n (a)\n\\end{minipage}%\n\\begin{minipage}{0.5\\columnwidth}\n \\centering\n \\includegraphics[width=0.96\\linewidth]{images\/game-124718-replay.png}\n (b)\n\\end{minipage}%\n\\end{center}\n\\caption{\nPlaying RBC well requires balancing risks and rewards.\n(a) On the left, \\texttt{Penumbra}{} moved the white queen to \\texttt{g8}.\nAfter sensing at \\texttt{d2}, Black could infer that the white queen\noccupied one of 25 squares.\nThat uncertainty allowed the white queen\nto survive and capture the black king on the next turn.\n(b) On the right, \\texttt{Penumbra}{} moved the black queen to \\texttt{h2}.\nIn this case, the opponent detected and captured the black queen.\nThe games are available online at\n\\url{https:\/\/rbc.jhuapl.edu\/games\/120174} and\n\\url{https:\/\/rbc.jhuapl.edu\/games\/124718}.\n}\n\\label{fig:queen-risks}\n\\end{minipage}\n\\end{figure}\n\nDSMCP uses sampling to break the\n``curse of dimensionality'' \\citep{pineau2006anytime} in three ways:\nsampling possible histories with a particle filter,\nsampling possible futures with\nupper confidence bound tree search (UCT) \\citep{kocsis2006bandit},\nand sampling possible world states{} within each information state{} uniformly.\nIt represents information state{}s with a generally-applicable\nstochastic abstraction technique that\nproduces a ``synopsis'' from sampled world states{}.\nThis paper assesses DSMCP on reconnaissance blind chess (RBC),\na large imperfect information chess 
variant.\n\n\n\\section{Background}\n\\label{sec:related-work}\n\nSignificant progress has been made in recent years in\nboth perfect and imperfect information settings.\nFor example, using deep neural networks to guide UCT\nhas enabled monumental achievements in\nabstract strategy games as well as computer games\n\\citep{silver2016mastering, silver2017mastering,\n silver2017chess, schrittwieser2019mastering,\n wu2020accelerating, tomasev2020assessing}.\nThis work employs deep learning in a similar fashion. \n\nRecent advancements in imperfect information games are also remarkable.\nSeveral programs have reached superhuman performance in Poker\n\\citep{moravcik2017deepstack, brown2018superhuman, brown2019superhuman, brown2020combining}.\nIn particular, ReBeL \\citep{brown2020combining}\ncombines RL and search by converting imperfect information games\ninto continuous state space perfect information games\nwith public belief states as nodes.\nThis approach is powerful, but it relies on public knowledge and fails to scale to\ngames with hidden actions and substantial private information, such as RBC.\n\nInformation set search \\citep{parker2006overconfidence, parker2010paranoia}\nis a limited-depth algorithm for imperfect information games\nthat operates on information states{} according to a minimax rule.\nThis algorithm was designed for and evaluated\non Kriegspiel chess, which is comparable to RBC.\n\nPartially observable Monte Carlo planning (POMCP) \\citep{silver2010pomcp}\nachieves optimal policies for POMDPs\nby tracking approximate belief states with an unweighted particle filter\nand planning with a variant of UCT on a search tree of histories.\nIn practice, POMCP can suffer from particle depletion,\nrequiring a domain-specific workaround.\nThis work combines an unweighted particle filter\nwith a novel information state{} abstraction technique\nwhich increases sample quality and supports deep learning.\n\nSmooth UCT \\citep{heinrich2015smoothuct} 
and\ninformation set Monte Carlo tree search (ISMCTS) \\citep{cowling2012ismcts}\nmay be viewed as multi-agent versions of POMCP.\nThese two algorithms for playing extensive-form games\nbuild search trees (for each player) of information states{}.\nLike DSMCP, both algorithms\nperform playouts from determinized states\nthat are accurate from the current player's perspective,\neffectively granting the opponent extra information.\nStill, Smooth UCT approached a Nash equilibrium\nby incorporating a stochastic bandit algorithm into its tree search.\nDSMCP uses a similar bandit algorithm that mixes in\na learned policy during early node visits.\n\nWhile adapting perfect information algorithms has performed surprisingly well\nin some imperfect information settings \\citep{long2010pimc},\nthe theoretical guarantees of\nvariants of counterfactual regret minimization (CFR)\n\\citep{neller2013introduction, brown2018deep}\nare enticing.\nOnline outcome sampling (OOS) \\citep{lisy2015online}\nextends Monte Carlo counterfactual regret minimization (MCCFR) \\citep{lanctot2009regret}\nby building its search tree incrementally and\ntargeting playouts to relevant parts of the tree.\nOOS draws samples from the beginning of the game.\nMCCFR and OOS are theoretically guaranteed to converge\nto a Nash equilibrium strategy.\nNotably, CFR-based algorithms produce explicitly mixed strategies\nwhile DSMCP relies on incidental stochasticity.\n\nNeural fictitious self-play (NFSP) \\citep{heinrich2016deep} is an RL algorithm\nfor training two neural networks for imperfect information games.\nExperiments with NFSP employed compact observation embeddings of information states{}.\nDSMCP offers a generic technique for embedding information states{} in large games.\nDual sequential Monte Carlo (DualSMC) \\citep{wang2019dualsmc}\nestimates belief states and plans\nin a continuous domain via sequential Monte Carlo with heuristics.\n\n\n\\section{Reconnaissance blind 
chess}\n\\label{sec:reconnaissance-blind-chess}\n\nReconnaissance blind chess (RBC)\n\\citep{newman2016reconnaissance, markowitz2018complexity, pmlr-v123-gardner20a}\nis a chess variant that incorporates\nuncertainty about the placement of the opposing pieces\nalong with a private sensing mechanism.\nAs shown in Figure~\\ref{fig:log-information-set-graph},\nRBC players are often faced with thousands of possible game states,\nand reducing uncertainty increases the odds of winning.\n\n\n\\paragraph{Game rules}\n\nPieces move in the same way in RBC as in chess.\nPlayers cannot directly observe the movement of the opposing pieces.\nHowever, at the beginning of each turn,\nplayers may view the ground truth of any $3\\stimes3$ patch of the board.\nThe information gained from the sensing action remains private to that player.\nPlayers are also informed of the location of all captures,\nbut not the identity of capturing pieces.\nWhen a requested move is illegal,\nthe move is substituted with the closest legal move\nand the player is notified of the substitution.\nFor example, in Figure~\\ref{fig:queen-risks}~(a),\nif Black attempted to move the rook from \\texttt{h8} to \\texttt{f8},\nthe rook would capture the queen on \\texttt{g8} and stop there instead.\nPlayers are always able to track the placement of their own pieces.\nCapturing the opposing king wins the game, and\nplayers are not notified about check.\nPassing and moving into check are legal.\n\n\n\\paragraph{Official competition}\n\nThis paper introduces the program \\texttt{Penumbra}{},\nthe winner of the official 2020 RBC rating competition.\nIn total, 34 programs competed to achieve the highest rating\nby playing public games.\nRatings were computed with \\textit{BayesElo} \\citep{coulom2008whr},\nand playing at least 100 games was required to be eligible to win.\nFigure~\\ref{fig:queen-risks} shows ground truth positions from the tournament in which \\texttt{Penumbra}{} voluntarily put its queen in 
danger.\nPlayers were paired randomly,\nbut the opponent's identity was provided at the start of each game,\nwhich allowed tailoring strategies to specific opponents.\nHowever, opponents were free to change their strategies at any point,\nso attempting to exploit others could backfire.\nNonetheless, \\texttt{Penumbra}{} sought to model and counter predictable opponents\nrather than focusing on finding a Nash equilibrium.\n\n\n\\paragraph{Other RBC programs}\n\nRBC programs have employed a variety of\nalgorithms \\citep{pmlr-v123-gardner20a} including\nQ-learning \\citep{mnih2013playing},\ncounterfactual regret minimization (CFR) \\citep{zinkevich2008regret},\nonline outcome sampling (OOS) \\citep{lisy2015online},\nand the Stockfish chess engine \\citep{romstad2020stockfish}.\nAnother strong RBC program\n\\citep{highley2020, blowitski2021checkpoint}\nmaintains a probability distribution for each piece.\nMost RBC programs select sense actions and move actions in separate ways,\nwhile DSMCP unifies all action selection.\n\\citet{savelyev2020mastering} also applied UCT to RBC and modeled the\nroot belief state with a distribution over 10,000 tracked positions.\nInput to a neural network consisted of the 100 most-likely positions,\nand storing a single training example required\n3.5MB on average,\nlarge enough to hinder training.\nThis work circumvented the same issue by\nrepresenting training examples with compact synopses which are\nless than 1kB each.\n\n\n\\section{Terminology}\n\\label{sec:extensive-form-games}\n\nConsider the two-player extensive-form game with\nagents $\\mathcal{P} = \\{$self, opponent$\\}$, actions $\\mathcal{A}$,\n``ground truth'' world states{} $\\mathcal{X}{}$,\nand initial state $x_0 \\in \\mathcal{X}{}$.\nEach time an action is taken,\neach agent $p \\in \\mathcal{P}$ is given an observation\n$\\mathbf{o}_p \\in \\mathcal{O}$ that matches ($\\sim$) the possible world states{} from $p$'s perspective.\nFor simplicity, assume the game has 
deterministic actions\nsuch that each $a \\in \\mathcal{A}$ is a function $a : X{} \\rightarrow \\mathcal{X}{}$\ndefined on a subset of world states{} $X{} \\subset \\mathcal{X}{}$.\nDefine $\\mathcal{A}_x{}$ as the set of actions available from $x{} \\in \\mathcal{X}{}$.\n\nAn information state{} (infostate{}) $s \\in \\mathcal{S}$ for agent $p$\nconsists of all observations $p$ has received so far.\\footnote{\nAn infostate{} is equivalent to an information set,\nwhich is the set of all possible action histories from $p$'s perspective\n\\citep{osborne1994course}.\n}\nLet $\\mathcal{X}{}_s \\subset \\mathcal{X}{}$ be the set of all\nworld states{} that are possible from $p$'s perspective from $s$.\nIn general, $\\mathcal{X}_s$ contains less information than $s$\nsince some (sensing) actions may not affect the world state.\nDefine a collection of limited-size world state{} sets\n$\\mathcal{L} = \\{L \\subset \\mathcal{X}_s : s \\in \\mathcal{S}, |L| \\le \\ell\\}$,\ngiven a constant $\\ell$.\n\n\nLet\n$\\rho : \\mathcal{X}{} \\rightarrow \\mathcal{P}$ indicate the agent to act in each world state{}.\nAssume\nthat $\\mathcal{A}_x{} = \\mathcal{A}_y$ and $\\rho(x{}) = \\rho(y)$\nfor all $x{}, y \\in \\mathcal{X}{}_s$ and $s \\in \\mathcal{S}$.\nThen extend the definitions of\nactions available $\\mathcal{A}_*$ and agent to act $\\rho$\nover sets of world states{} and over infostates{} in the natural way.\nA policy $\\pi(a | s)$\nis a distribution over actions given an infostate{}.\nA belief state $B(h)$ is a distribution over action histories.\nCreating a belief state from an infostate{}\nrequires assumptions about the opponent's action policy $\\tau(a|s)$.\nLet $\\mathcal{R}_p : \\mathcal{X}{} \\rightarrow \\mathbb{R}$ map terminal states to the reward for player $p$.\nThen $(\\mathcal{S}, \\mathcal{A}, \\mathcal{R}_\\text{self}, \\tau, s_0)$\nis a POMDP, where\nthe opponent's policy $\\tau$ provides environment state transitions\nand $s_0$ is the initial 
infostate{}.\nIn the rest of this paper, the word ``state'' refers to a world state{}\nunless otherwise specified.\n\n\\section{Deep synoptic Monte Carlo planning}\n\\label{sec:dsmcp-algorithm}\n\n\\begin{figure}\n\\begin{minipage}{.5\\linewidth}\n\\begin{center}\n\\input{overview_figure.tex}\n\\end{center}\n\\caption{%\nDSMCP approximates infostates{}\nwith size-limited sets of possible states (circles).\nIt tracks all possible states $X_t$ for each turn from its own perspective\nand constructs belief states $\\hat{B}_t$ with\napproximate infostates{} from the opponent's perspective.\nAt the root of each playout{},\nthe initial approximate infostate{} for the opponent\nis sampled from $\\hat{B}_t$, and the initial approximate infostate{} for\nitself is a random subset of $X_t$.\n}\n\\label{fig:algorithm-figure}\n\\end{minipage}%\n\\begin{minipage}{.5\\linewidth}\n\\input{algorithms_one.tex}\n\\end{minipage}\n\\end{figure}\n\nEffective planning algorithms for imperfect information games\nmust model agents' choice of actions based on (belief states derived from)\ninfostates{}, not on world states{} themselves.\nDeep synoptic Monte Carlo planning (DSMCP) approximates infostates{}\nwith size-limited sets of possible world states{} in $\\mathcal{L}$.\nIt uses those approximations to construct a belief state and as UCT nodes\n\\citep{kocsis2006bandit}.\nFigure~\\ref{fig:algorithm-figure} provides a high-level\nvisualization of the algorithm.\n\n\\begin{figure}\n\\input{algorithms.tex}\n\\end{figure}\n\nA bandit algorithm chooses an action during each node visit,\nas described in Algorithm~\\ref{alg:stochastic-bandit}.\nThis bandit algorithm is similar to Smooth UCB \\citep{heinrich2015smoothuct}\nin that they both introduce stochasticity by mixing in a secondary policy.\nSmooth UCB empirically approached a Nash equilibrium\nutilizing the average policy according to action visits at each node.\nDSMCP mixes in a neural network's policy ($\\pi$) instead.\nThe constant $c$ 
controls the level of exploration,\nand $m$ controls how the policy $\\pi$ is mixed into the bandit algorithm.\nFor example, taking $m = 0$ always selects actions directly with $\\pi$\nwithout considering visit counts, and taking $m = \\infty$ never mixes in $\\pi$.\nAs in \\cite{silver2016mastering},\n$\\pi$ provides per-action exploration values which guide the tree search.\n\nApproximate belief states are constructed as subsets\n$\\hat{B} \\subset \\mathcal{L}$, where each $L \\in \\hat{B}$\nis a set of possible world-states from the opponent's perspective.\nThis ``second order'' representation of\nbelief states allows DSMCP to account for the opponent's uncertainty.\nInfostates{} sampled with rejection\n(Algorithm~\\ref{alg:prepare-sample})\nare used as the ``particles'' in a particle filter which models\nsuccessive belief states.\nSampling is guided by a neural network policy ($\\hat{\\tau}$)\nbased on the identity of the opponent.\nTo counter particle deprivation, if $k$ consecutive candidate samples\nare rejected as incompatible with the possible world states,\nthen a singleton sample consisting of a randomly-chosen possible\nstate is selected instead.\n\nThe tree search, described in Algorithm~\\ref{alg:choose-action},\ntracks an approximate infostate{} for each player while simulating playouts{}.\nPlayouts{} are also guided by\npolicy ($\\pi$ and $\\hat{\\tau}$) and value ($\\nu$) estimations from a neural network.\nA synopsis function $\\sigma$\ncreates a fixed-size summary of each node as input for the network.\nThe constant $b$ is the batch size for inference, $d$ is the search depth,\n$\\ell$ is the size of approximate infostate{}s,\n$n_{\\text{vl}}$ is the virtual loss weight,\nand $z$ is a threshold for increasing search depth.\n\nAlgorithm~\\ref{alg:play-game} describes how to play an entire game,\ntracking all possible world states{}.\nApproximate belief states ($\\hat{B}_t$) are constructed for each past turn\nby tracking $n_{\\text{particles}}$ 
elements of $\\mathcal{L}$\n(from the opponent's point of view) with an unweighted particle filter.\nEach time the agent receives a new observation,\nall of the (past) particles that are inconsistent with the observation\nare filtered out and replenished, starting with the oldest belief states.\n\n\n\\subsection{Synopsis}\n\n\\input{example_game.tex}\n\n\\begin{figure*}[ht]\n\\begin{center}\n\\input{bitboards_wide.tex}\n\\end{center}\n\\caption{\nThis set of synopsis bitboards was used as input to the neural network\nbefore White's sense on turn 5 of the game in Figure~\\ref{fig:sample-game}.\nThe synopsis contains 104 bitboards.\nEach bitboard encodes 64 binary features\nof the possible state set that the synopsis summarizes.\nFor example, bitboard \\#26 contains the possible locations of opposing pawns, and\nbitboard \\#27 contains the possible locations of opposing knights.\nAn attentive reader may notice that the black pawn on \\texttt{h4} is missing from bitboard \\#26,\nwhich is due to subsampling to $\\ell = 128$ states before computing the bitboards.\nIn this case, the true state was missing from the set of states used to create the synopsis.\nThe features in each synopsis are only approximations of the\ninfostates{} that they represent.\nThe first 10 bitboards are constants, which provide information that\nis difficult for convolutions to construct otherwise\n\\citep{liu2018coordconv}.\n}\n\\label{fig:sample-situation}\n\\end{figure*}\n\n\\begin{figure*}[ht]\n\\begin{center}\n\\input{architecture.tex}\n\\end{center}\n\\caption{\n\\texttt{Penumbra}{}'s network contains a shared tower with 10 residual blocks\nand 14 headsets.\nEach headset contains 5 heads for a total of 70 output heads.\nThe residual blocks are shown with a double border,\nand they each contain two $3 \\stimes 3$ convolutional layers and batch normalization.\nAll of the convolutional layers in the headsets are $1 \\stimes 1$ convolutions\nwith the exception of the one residual block for 
each policy head.\nEach headset was trained on a separate subset of the data, as described in\nTable~\\ref{tab:headset-descriptions}.\nThe policy head provides logits for both sense and move actions.\n}\n\\label{fig:architecture-diagram}\n\\end{figure*}\n\nOne of the contributions of this paper is the methodology\nused to approximate and encode infostates{}.\nGames that consist of a fixed number of turns, such as poker, admit a\nnaturally-compact infostate{} representation based on observations\n\\citep{heinrich2015smoothuct, heinrich2016deep}.\nHowever, perfect representations are not always practical.\nGame abstractions are often used to reduce\ncomputation and memory requirements.\nFor example, imperfect recall is an effective abstraction when past actions are\nunnecessary for understanding the present situation\n\\citep{waugh2009imperfectrecall, lanctot2012noregret}.\n\nDSMCP employs a stochastic abstraction\nwhich represents infostates{} with sets of world states{}\nand then subsamples to a manageable cardinality $(\\ell)$.\nFinally, a permutation-invariant synopsis function $\\sigma$ produces\nfixed-size summaries of the approximate infostates{}\nwhich are used for inference.\nAn alternative is to run inference on ``determinized'' world states{} individually\nand then somehow aggregate the results.\nHowever, such aggregation can easily lead to strategy fusion \\citep{frank1998finding}.\nOther alternatives include evaluating states with a recurrent network\n\\citep{rumelhart1986learning}\none-at-a-time or using a permutation-invariant architecture\n\\citep{zaheer2017deep, wagstaff2019limitations}.\n\nGiven functions $g_i : \\mathcal{X} \\rightarrow \\{0, 1\\}$\nfor $i = 0, \\dots, F-1$\nthat map states to binary features,\ndefine the $i^\\text{th}$ component of a synopsis function\n$\\sigma : \\mathcal{L} \\rightarrow \\{0, 1\\}^{F}$\nas\n\\begin{equation}\n \\sigma_i(X) =\n g_i(x_0) \\ast_i g_i(x_1) \\ast_i \\dots \\ast_i g_i(x_{\\ell})\n\\end{equation}\nwhere $X = 
\\{x_0, x_1, \\dots, x_\\ell\\}$\nand $*_i$ is either the logical \\texttt{AND} ($\\land$)\nor the logical \\texttt{OR} ($\\lor$) operation.\nFor example, if $g_i$ encodes whether \nan opposing knight can move to the \\texttt{d7} square of a chess board\nand $*_i = \\land$, then\n$\\sigma_i$ indicates that a knight can definitely move to \\texttt{d7}.\nFigure~\\ref{fig:sample-game} shows an example game, and\nFigure~\\ref{fig:sample-situation} shows an example\noutput of \\texttt{Penumbra}{}'s synopsis function,\nwhich consists of 104 bitboard feature planes each with 64 binary features.\nThe appendix describes each feature plane.\n\n\n\\subsection{Network architecture}\n\n\\texttt{Penumbra}{} uses a residual neural network\n\\citep{he2016resnet}\nas shown in Figure~\\ref{fig:architecture-diagram}.\nThe network contains 14 headsets,\ndesigned to model specific opponents and regularize each other\nas they are trained on different slices of data \\citep{zhang2020balance}.\nEach headset contains 5 heads:\na policy head, a value head, two heads for predicting\nwinning and losing within the next 5 actions,\nand a head for guessing the number of pieces of each type in the\nground truth world state{}.\nThe \\texttt{Top}{} policy head\nand the \\texttt{All}{} value head\nare used for planning as $\\pi$ and $\\nu$, respectively.\nThe other heads\n(including the \\texttt{SoonWin}, \\texttt{SoonLose}, and \\texttt{PieceCount} heads)\nprovide auxiliary tasks for further\nregularization \\citep{wu2020accelerating, fifty2020measuring}.\nWhile playing against an opponent that is ``recognized''\n(when a headset was trained on data from only that opponent),\nthe policy head ($\\hat{\\tau}$) of the corresponding headset is used\nfor the opponent's moves while progressing the particle filter\n(Algorithm~\\ref{alg:prepare-sample}) and\nwhile constructing the UCT tree (Algorithm~\\ref{alg:choose-action}).\nWhen the opponent is unrecognized, the \\texttt{Top}{} policy head is used by 
default.\n\n\\subsection{Training procedure}\n\\label{sec:training-procedure}\n\nThe network was trained on historical%\n\\footnote{%\nThe games were downloaded\nfrom \\url{rbmc.jhuapl.edu} in June 2019 and \\url{rbc.jhuapl.edu} in August 2020.\nAdditionally, 5,000 games were played locally by \\texttt{StockyInference}{}.\n} game data\nas described by Table~\\ref{tab:headset-descriptions}.\nThe reported accuracies are averages over 5 training runs.\nThe \\texttt{All}{} headset was trained on all games,\nthe \\texttt{Top}{} headset was trained on games from the highest-rated players,\nthe \\texttt{Human}{} headset was trained on all games played by humans,\nand each of the other 11 headsets was trained to mimic a specific opponent.\n\n10\\% of the games were used as validation data based on game filename hashes.\nTraining examples were extracted from games multiple times since\nreducing possible state sets to $\\ell$ states is non-deterministic.\nA single step of vanilla stochastic gradient descent\nwas applied to one headset at a time,\nalternating between headsets according to their training weights.\nSee the appendix for hyperparameter settings and accuracy cross tables.\nTraining and evaluation were run on four RTX 2080 Ti GPUs.\n\n\\input{headset_summary.tex}\n\n\\subsection{Implementation details}\n\\label{sec:implementation-details}\n\n\\texttt{Penumbra}{} plays RBC with DSMCP along with\nRBC-specific extensions.\nFirst, sense actions that are dominated by other sense actions are pruned from consideration.\nSecond, \\texttt{Penumbra}{} can detect some forced wins in\nthe sense phase, the move phase, and during the opponent's turn.\nThis static analysis is applied at the root and to playouts{};\nplayouts{} are terminated as soon as a player could win,\navoiding unnecessary neural network inference.\nThe static analysis was also used to clean training games in which\nthe losing player had sufficient information to find a forced win.\n\nPiece placements are 
represented with bitboards \\citep{browne2014bitboard},\nand the tree of approximate infostates{}\nis implemented with a hash table.\nZobrist hashing \\citep{zobrist1990hashing}\nmaintains hashes of piece placements incrementally.\nHash table collisions are resolved by overwriting older entries.\nThe tree was not implemented until after the competition, so\nfixed-depth playouts{} were used instead ($m = 0$).\nInference is done in batches of $256$\nduring both training and online planning.\nThe time used per action is approximately proportional to the time remaining.\nThe program processes approximately 4,000 nodes per second, and it\nplays randomly when the number of possible states exceeds 9 million.\n\n\n\\section{Experiments}\n\\label{sec:experiments}\n\nThis section presents the results of playing games between \\texttt{Penumbra}{} and\nseveral publicly available RBC baselines\n\\citep{pmlr-v123-gardner20a, bernardoni2020baselines}.\nEach variant of \\texttt{Penumbra}{} in Table~\\ref{tab:baseline-bots} played 1000 games against each baseline, and each variant in \nTable~\\ref{tab:caution-and-paranoia}\nand Table~\\ref{tab:exploration} played 250 games against each baseline.\nGames with errors were ignored and replayed.\nThe Elo ratings and 95\\% confidence intervals\nwere computed with \\textit{BayesElo} \\citep{coulom2008whr}\nand are all compatible.\nThe scale was anchored with \\texttt{StockyInference}{} at 1500\nbased on its rating during the competition.\n\nTable~\\ref{tab:baseline-bots} gives ratings of the baselines and\nfive versions of \\texttt{Penumbra}{}.\n\\texttt{PenumbraCache}\nrelied solely on the network policy for action selection in playouts\n($m \\,{=}\\, 0$),\n\\texttt{PenumbraTree} built a UCT search tree\n($m \\,{=}\\, \\infty$), and\n\\texttt{PenumbraMixture}\nmixed in the network policy during early node visits\n($m \\,{=}\\, 1$).\nThe mixed strategy performed the best.\n\\texttt{PenumbraNetwork} selected actions\nbased on the network 
policy without performing any playouts.\n\\texttt{PenumbraSimple}\nis the same as \\texttt{PenumbraMixture}\nwith the static analysis described\nin Section~\\ref{sec:implementation-details} disabled.\n\\texttt{PenumbraNetwork} and \\texttt{PenumbraSimple} serve as ablation studies;\nremoving the search algorithm is detrimental while the effect of removing\nthe static analysis is not statistically significant.\nUnexpectedly, \\texttt{Penumbra}{} played the strongest against \\texttt{StockyInference}{}\nwhen that program was unrecognized.\nSo, in this case, modeling the opponent with a stronger policy\noutperformed modeling it more accurately.\n\nTwo algorithmic modifications that give the opponent\nan artificial advantage during planning were investigated.\nTable~\\ref{tab:caution-and-paranoia}\nreports the results of a grid search over\n``cautious'' and ``paranoid'' variants of DSMCP.\nThe caution parameter $\\kappa$ specifies the percentage\nof playouts{} that use $\\ell = 4$\nfor the opponent instead of the higher default limit.\nSince each approximate infostate{}\nis guaranteed to contain the correct ground truth in playouts{},\nreducing $\\ell$ for the opponent gives the opponent higher-quality information,\nallowing the opponent to counter risky play more easily in the constructed UCT tree.\n\nThe paranoia parameter augments the exploration values in\nAlgorithm~\\ref{alg:stochastic-bandit} to incorporate\nthe minimum value seen during the current playout{}.\nWith paranoia $\\phi$, actions are selected according to\n\\begin{equation}\n \\argmax_a \\left((1 - \\phi)\\frac{ \\vec{q}_a }{ \\vec{n}_a }\n + \\phi \\vec{m}_a\n + c \\pi_a \\sqrt{\\frac{\\ln{n}}{\\vec{n}_a}} \\right)\n\\end{equation}\nwhere $\\vec{m}$ contains the minimum value observed for each action.\nThis is akin to the notion of paranoia studied by\n\\citet{parker2006overconfidence, parker2010paranoia}.\n\n\\input{result_tables.tex}\n\nTable~\\ref{tab:exploration} shows the results of a grid 
search over\nexploration constants and two bandit algorithms.\nUCB1 \\citep{kuleshov2014algorithms}\n(with policy priors), which is used on the last line of\nAlgorithm~\\ref{alg:stochastic-bandit},\nis compared with ``a variant of PUCT'' (aVoP)\n\\citep{silver2016mastering, yu2019elf, lee2019minigo},\nanother popular bandit algorithm.\nThis experiment used $\\kappa = 20\\%$ and $\\phi = 20\\%$.\nFigure~\\ref{fig:uncertainty-graphs} shows that\n\\texttt{Penumbra}{}'s value head accounts for\nthe uncertainty of the underlying infostate{}.\n\n\\begin{figure}[h]\n \\centering\n \\begin{minipage}{0.5\\linewidth}\n \\centering\n \\includegraphics[width=1\\linewidth]{images\/uncertainty-train-plot-6.png}\n (a)\n \\end{minipage}%\n \\begin{minipage}{0.5\\linewidth}\n \\centering\n \\includegraphics[width=1\\linewidth]{images\/uncertainty-test-plot-6.png}\n (b)\n \\end{minipage}\n\\caption{\nThe mean historical win percentage and the mean network value assigned to\n(a) train and (b) validation synopses tend to decrease as\nthe number of world states{} given to $\\sigma$ increases.\n}\n\\label{fig:uncertainty-graphs}\n\\end{figure}\n\n\n\\section{Per-bit saliency}\n\\label{sec:saliency}\n\nSaliency methods may be able to identify which of the\nsynopsis feature planes are most important and which are least important.\nGradients only provide local information, and some saliency methods\nfail basic sanity checks \\citep{adebayo2018sanity}.\nHigher-quality saliency information may be surfaced by\nintegrating gradients over gradually-varied inputs\n\\citep{sundararajan2017axiomatic, kapishnikov2019xrai}\nand by smoothing gradients locally\n\\citep{smilkov2017smoothgrad}.\nThose saliency methods are not directly applicable to\ndiscrete inputs such as the synopses used in this work.\nSo, this paper introduces a saliency method that aggregates\ngradient information across two separate dimensions:\ntraining examples and iterations.\nPer-batch saliency (PBS) averages the absolute value 
of gradients\nover random batches of test examples throughout training.\nSimilarly, per-bit saliency (PbS) averages\nthe absolute value of gradients over bits (with specific values)\nwithin batches of test examples throughout training.\nGradients were taken both with respect to the loss\nand with respect to the action policy.\n\n\\begin{figure}[t]\n \\centering\n \\begin{minipage}{0.5\\linewidth}\n \\centering\n \\includegraphics[width=1\\linewidth]{images\/dark-squares-loss-per-batch-saliency-17.png}\n (a)\n \\includegraphics[width=1\\linewidth]{images\/dark-squares-action-per-bit-saliency-17.png}\n (b)\n \\end{minipage}%\n \\begin{minipage}{0.5\\linewidth}\n \\centering\n \\includegraphics[width=1\\linewidth]{images\/top-loss-per-batch-saliency-17.png}\n (c)\n \\includegraphics[width=1\\linewidth]{images\/top-action-per-bit-saliency-17.png}\n (d)\n \\end{minipage}\n\\caption{\n(a) The loss per-batch saliency (PBS) and\n(b) the action per-bit saliency (PbS)\nare taken on test examples during training.\nThese graphs show the saliency of\nfeature plane \\#8, dark squares, for each headset in one training run.\nThe large gradients with respect to the loss\nsuggest that the \\texttt{Genetic}{} headset has overfit.\n(c) The loss PBS and\n(d) the action PbS\nprovide insight about which synopsis features are most useful.\nThe top-five most-salient feature planes\nand the least-salient feature plane\nfor the \\texttt{Top}{} headset from one training run are shown.\n}\n\\label{fig:saliency-graphs}\n\\end{figure}\n\nFigure~\\ref{fig:saliency-graphs}\nshows saliency information for input synopsis features used by \\texttt{Penumbra}{}.\nIn order to validate that these saliency statistics are meaningful,\nthe model was retrained 104 times,\nonce with each feature removed \\citep{hooker2018benchmark}.\nHigher saliency is slightly correlated\nwith decreased performance when a feature is removed.\nThe correlation coefficient to the average change in accuracy is\n$-0.208$ for 
loss-PBS, and $-0.206$ for action-PbS.\nExplanations for the low correlation include\nnoise in the training process and\nthe presence of closely-related features.\nUltimately, the contribution of a feature during training is distinct from\nhow well the model can do without that feature.\nSince some features are near-duplicates of others,\nremoving one may simply increase dependence on another.\nStill, features with high saliency\n--- such as the current stage (sense or move) and the location of the last capture ---\nare likely to be the most important,\nand features with low saliency may be considered for removal.\nThe appendix includes saliency statistics for each feature plane.\n\n\n\\section{Discussion}\n\\label{sec:discussion}\n\n\\paragraph{Broader impact}\n\nDSMCP is more broadly applicable\nthan some prior algorithms for imperfect information games,\nwhich are intractable in settings with large infostates{}\nand small amounts of shared knowledge \\citep{brown2020combining}.\nRBC and the related game Kriegspiel were motivated by\nuncertainty in warfare \\citep{newman2016reconnaissance}.\nWhile playing board games is not dangerous in itself,\nalgorithms that account for uncertainty may become\neffective and consequential in the real world.\nIn particular, since it focuses on exploiting weaknesses of other agents,\nDSMCP could be applied in harmful ways.\n\n\\paragraph{Future work}\n\nPlanning in imperfect information games\nis an active area of research \\citep{russell2020artificial},\nand RBC is a promising testing ground for such research.\n\\texttt{Penumbra}{} would likely benefit from further hyperparameter tuning\nand potentially alternative corralled bandit algorithms \\citep{arora2020corralling}.\nModeling an opponent poorly could be catastrophic;\nalgorithmic adjustments may lead to more-robust best-response strategies\n\\citep{ponsen2011acm}.\nHow much is lost by collapsing infostates{} with synopses is unclear\nand deserves further 
investigation.\nFinally, the ``bitter lesson'' of machine learning \\citep{sutton2019bitter}\nsuggests that a learned synopsis function may perform better.\n\n\n\\section*{Acknowledgements}\n\nThanks to the Johns Hopkins University Applied Physics Laboratory\nfor inventing such an intriguing game and for hosting RBC competitions.\nThanks to Ryan Gardner for valuable correspondence.\nThanks to Rosanne Liu, Joel Veness, Marc Lanctot, Zhe Zhao, and Zach Nussbaum\nfor providing feedback on early drafts.\nThanks to William Bernardoni for open sourcing high-quality baseline bots.\nThanks to Solidmind for the song ``Penumbra'',\nwhich is an excellent soundtrack for programming.\n\n\n\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\nAge-related facial technologies generally address the two areas of age estimation \\cite{Chen_FG2011, Luu_FG2011, Luu_BTAS2009, Luu_IJCB2011, Duong_ICASSP2011, Luu_ROBUST2008} and age progression \\cite{nhan2015beyond, patterson2007comparison, Zhang_2017_CVPR, Patterson2013, wang2018face_aging, Shu_2015_ICCV}. The face age-estimation problem is defined as building computer software that has the ability to recognize the ages of individuals in a given photograph. Comparatively, the face age-progression problem necessitates the more complex capability to predict the future facial likeness of people appearing in images \\cite{Luu_CAI2011}. Aside from the innate curiosity of individuals, research of face aging has its origins in cases of missing persons and wanted fugitives, in either case law enforcement desires plausible age-progressed images to facilitate searches. Accurate face aging also provides benefits for numerous practical applications such as age-invariant face recognition \\cite{Xu_IJCB2011, Xu_TIP2015, Le_JPR2015}. There have been numerous anthropological, forensic, computer-aided, and computer-automated approaches to facial age-progression. 
However, the results from previous methods for synthesizing aged faces that represent accurate physical processes involved in human aging are still far from perfect. This is especially so in age-progressing videos of faces, due to the usual challenges for face processing involving pose, illumination, and environment variation as well as differences between video frames.\n\n\nThere have been two key research directions in age progression for both conventional computer-vision approaches and recent deep-learning methods -- \textit{one-shot synthesis} and \textit{multiple-shot synthesis}. Both approaches have used facial image databases with longitudinal sample photos of individuals, where the techniques attempt to discover aging patterns demonstrated over individuals or the population represented. In one-shot synthesis approaches, a new face at the target age is directly synthesized by inferring the relationships between training images and their corresponding age labels and then applying them to generate the aged likeness. These prototyping methods \cite{burt1995perception, kemelmacher2014illumination,rowland1995manipulating} often classify training images in facial image databases into age groups according to labels. Then the average faces, or mean faces, are computed to represent the key presentation or archetype of their groups.\nThe difference between the input-age and target-age archetypes is then added to the input image to synthesize the age-progressed faces at the requested age.\nIn a similar way, Generative Adversarial Networks (GANs) \cite{Zhang_2017_CVPR, wang2018face_aging} methods present the relationship between semantic representation of input faces and age labels by constructing a deep neural network generator. 
It is then combined with the target age labels to synthesize output results.\n\nMeanwhile, in multiple-shot synthesis, the longitudinal aging process is decomposed into multiple steps of aging effects \cite{Duong_2017_ICCV,Duong_2016_CVPR, Shu_2015_ICCV, wang2016recurrent,yang2016face}. These methods build on the facial aging transformation between two consecutive age groups. Finally, the progressed faces from one age group to the next are synthesized step-by-step until they reach the target age. These methods can model the long-term sequence of face aging using this strategy. However, these methods still have drawbacks because long-term aging sequences are neither well represented nor well balanced in face databases. \n\nExisting age-progression methods all similarly suffer from problems in both directions. Firstly, they only work on single input images. Supposing there is a need to synthesize aging faces presented in a captured video, these methods usually have to split the input video into separate frames and synthesize every face in each frame \textit{independently}, which may often present \textit{inconsistencies} between synthesized faces. Since face images for each frame are synthesized separately, the aging patterns of generated faces of the same subject are also likely not coherent. Furthermore, most aging methods are unable to produce \textit{high-resolution} images of age progression, important for features such as fine lines that develop fairly early in the aging process. This may be especially true in the latent-based methods \cite{kemelmacher2014illumination, Duong_2017_ICCV,Duong_2016_CVPR, Shu_2015_ICCV, wang2016recurrent,yang2016face}.\n\n\paragraph{Contributions of this work:} \nThis paper presents a deep Reinforcement Learning (RL) approach to Video Age Progression to guarantee the consistency of aging patterns in synthesized faces captured in videos. 
In this approach, the age-transformation embedding is modeled as the optimal selection using Convolutional Neural Network (CNN) features under an RL framework. Rather than applying the image-based age progression to each video frame independently as in previous methods, the proposed approach has the capability of exploiting the temporal relationship between two consecutive frames of the video. This property facilitates maintaining consistency of aging information embedded into each frame.\nIn the proposed structure, not only can a \textit{smoother synthesis} be produced across frames in videos, but also the \textit{visual fidelity} of aging data, i.e. all images of a subject at the same or different ages, is preserved for better age transformations. To the best of our knowledge, our framework is one of the first face aging approaches in videos.\nFinally, this work contributes a new large-scale face-aging database\footnote{\url{https:\/\/face-aging.github.io\/RL-VAP\/}} to support future studies related to automated face age-progression and age estimation in both images and videos.\n\section{Related work}\n\n\n\begin{table*}[!t] \n\t\small \n\t\centering\n\t\caption{The properties of our collected AGFW-v2 in comparison with other aging databases. 
For the AGFW-v2 video set, the images of the subjects in old age are also collected as a reference for the subjects' appearance changes.}\n\t\label{tab:AgingDatabaseProperties}\n\t\begin{tabular}{l c c c c c c }\n\t\t\Xhline{2\arrayrulewidth}\n\t\t\textbf{Database} & \textbf{\# Images} & \textbf{\# Subjects} & \textbf{Label type} & \textbf{Image type} & \textbf{Subject type} & \textbf{Type}\\ \n\t\t\Xhline{2\arrayrulewidth}\n\t\tMORPH - Album 1 \cite{ricanek2006morph} & 1,690 & 628 & Years old & Mugshot & Non-famous & Image DB\\\t\t\t\t\n\t\tMORPH - Album 2 \cite{ricanek2006morph} & 55,134 & 13,000 & Years old & Mugshot & Non-famous & Image DB\\\n\t\t\hline\n\t\tFG-NET \cite{fgNetData} & 1,002 & 82 & Years old & In-the-wild & Non-famous & Image DB\\\n\t\tAdienceFaces \cite{levi2015age} & 26,580 & 2,984 & Age groups & In-the-wild & Non-famous & Image DB\\\n\t\tCACD \cite{chen14cross} & 163,446 & 2,000 & Years old & In-the-wild & Celebrities & Image DB\\\n\t\tIMDB-WIKI \cite{Rothe-IJCV-2016} & 523,051 & 20,284 & Years old & In-the-wild & Celebrities & Image DB\\\n \n AgeDB \cite{AgeDB} & 16,488 & 568 & Years old & In-the-wild & Celebrities & Image DB\\\n AGFW \cite{Duong_2016_CVPR} & 18,685 & 14,185 & Age groups & In-the-wild\/Mugshot & Non-famous & Image DB\\\n \hline\n \textbf{AGFW-v2 (Image)} & \textbf{36,299} & \textbf{27,688} & \textbf{Age groups} & \textbf{In-the-wild\/Mugshot} & \textbf{Non-famous} & \textbf{Image DB}\\\n \textbf{AGFW-v2 (Video)} & \textbf{20,000} & \textbf{100} & \textbf{Years old} & \textbf{Interview\/Movie-style} & \textbf{Celebrities} & \textbf{Video DB}\\\n\t\t\hline\n\t\t\n\t\end{tabular}\n\t\vspace{-4mm}\n\end{table*}\n\nThis section provides an overview of recent approaches for age progression; \textit{these methods primarily use still images}. The approaches generally fall into one of four groups, i.e. 
modeling, reconstruction, prototyping, and deep learning-based approaches.\n\n\textit{Modeling-based} approaches aim at modeling both shape and texture of facial images using a parameterization method, then learning to change these parameters via an aging function. \nActive Appearance Models (AAMs) have been used with four aging functions in \cite{lanitis2002toward,patterson2006automatic} to model linearly both the general and the specific aging processes. Familial facial cues were combined with AAM-based techniques in \cite{luu2009Automatic, patterson2007comparison}. \cite{Patterson2013} incorporated an AAM reconstruction method into the synthesis process for a higher photographic fidelity of aging. An AGing pattErn Subspace (AGES) \cite{geng2007automatic} was proposed to construct a subspace for aging patterns as a chronological sequence of face images. \nIn \cite{tsai2014human}, AGES was enhanced with guidance faces containing the subject's characteristics for more stable results. \nThree-layer And-Or Graph (AOG) \cite{suo2010compositional, suo2012concatenational} was used to model a face as a combination of smaller parts, i.e. eyes, nose, mouth, etc. \nThen a Markov chain was employed to learn the aging process for each part. \n\nIn \textit{reconstruction-based} approaches, an aging basis is unified in each group to model aging faces. Person-specific and age-specific factors were independently represented by sparse-representation hidden factor analysis (HFA) \cite{yang2016face}. \nAging dictionaries (CDL) \cite{Shu_2015_ICCV} were proposed to model personalized aging patterns by attempting to preserve distinct facial features of an individual through the aging process.\n\n\n\textit{Prototyping-based} approaches employed prototypical facial images in a method to synthesize faces. The average face of each age group is used as the representative image for that group, and these are named the ``age prototypes'' \cite{rowland1995manipulating}. 
Then, by computing the differences between the prototypes of two age groups, an input face can be progressed to the target age through image-based manipulation \\cite{burt1995perception}. In \\cite{kemelmacher2014illumination}, high quality average prototypes constructed from a large-scale dataset were employed in conjunction with the subspace alignment and illumination normalization.\n\nRecently, \\textit{Deep learning-based approaches} have yielded promising results in facial age progression. \nTemporal and Spatial Restricted Boltzmann Machines (TRBM) were introduced in \\cite{Duong_2016_CVPR} to represent the non-linear aging process, with geometry constraints, and to model a sequence of reference faces as well as wrinkles of adult faces. A Recurrent Neural Network (RNN) with two-layer Gated Recurrent Unit (GRU) was employed to approximate aging sequences \\cite{wang2016recurrent}. \nAlso, the structure of Conditional Adversarial Autoencoder (CAAE) was applied to synthesize aged images in \\cite{antipov2017face}. Identity-Preserved Conditional Generative Adversarial Networks (IPCGANs) \\cite{wang2018face_aging} brought the structure of Conditional GANs with perceptual loss into place for synthesis process. A novel generative probabilistic model, called Temporal Non-Volume Preserving (TNVP) transformation \\cite{Duong_2017_ICCV} was proposed to model a long-term facial aging as a sequence of short-term stages. \n\n\\begin{figure*}[t]\n\t\\centering \\includegraphics[width=1.5\\columnwidth]{Aging_RL_framework.jpg}\n\t\\caption{The structure of the face aging framework in video. 
\\textbf{Best viewed in color and 2$\\times$ zoom in.}}\t\n\t\\label{fig:RL_Framework}\n\\end{figure*}\n\n\n\\section{Data Collection} \\label{sec:dbcollec}\nThe quality of age representation in a face database is one of the most important features affecting the aging learning process and could include such considerations as the number of longitudinal face-image samples per subject, the number of subjects, the range and distribution of age samples overall, and the population representation presented in the database. \nPrevious public databases used for age estimation or progression systems have been very limited in the total number of images, the number of images per subject, or the longitudinal separation of the samples of subjects in the database, i.e. FG-NET \\cite{fgNetData}, MORPH \\cite{ricanek2006morph}, AgeDB \\cite{AgeDB}. Some recent ones may be of larger scale but have noise within the age labels, i.e. CACD \\cite{chen14cross}, IMDB-WIKI \\cite{Rothe-IJCV-2016}. \nIn this work we introduce an extension of Aging Faces in the Wild (AGFW-v2) in terms of both \\textit{image and video} collections.\nTable \\ref{tab:AgingDatabaseProperties} presents the properties of our collected AGFW-v2 in comparison with others.\n\n\n\\subsection{Image dataset}\nAGFW \\cite{Duong_2016_CVPR} was first introduced with 18,685 images with individual ages sampled ranging from 10 to 64 years old. Based on the collection criteria of AGFW, a double-sized database was desired. Compared to other age-related databases, \\textit{most of the subjects in AGFW-v2 are not public figures and less likely to have significant make-up or facial modifications}, helping embed accurate aging effects during the learning process.\nIn particular, AGFW-v2 is mainly collected from three sources. Firstly, we adopt a search engine using different keywords, e.g. male at 20 years old, etc. Most images come from the daily life of non-famous subjects. 
Besides the images, all publicly available meta-data related to the subject's age are also collected. \nThe second part comes from mugshot images that are accessible from public domain. These are passport-style photos with \nages reported by service agencies. Finally, we also include the Productive Aging Laboratory (PAL) database \\cite{PALDB}.\nIn total, AGFW-v2 consists of 36,299 images divided into 11 age groups with a span of five years.\n\\noindent\n\\subsection{Video dataset}\nAlong with still photographs, we also collected a video dataset for temporal aging evaluations with 100 videos of celebrities. Each video clip consists of 200 frames.\nIn particular, searching based on the individuals' names during collection efforts, their interview, presentation, or movie sessions were selected such that only one face, in a clear manner, is presented in the frame.\nAge annotations were estimated using the year of the interview session versus the year of birth of the individual. Furthermore, in order to provide a reference for subject's appearance in old age, the face images of these individuals at the current age are also collected and provided as meta-data for the subjects' videos. \n\n\\section{Video-based Facial Aging}\n\nIn the simplest approach, age progression of a sequence may be achieved by independently employing image-based aging techniques on each frame of a video. However, treating single frames independently may result in inconsistency of the final aged-progressed likeness in the video, i.e. some synthesized features such as wrinkles appear differently across consecutive video frames as illustrated in Fig. \\ref{fig:FrameVsVideo_Mark}. 
\nTherefore, rather than considering a video as a set of independent frames, this method exploits the temporal relationship between frames of the input video to maintain visually cohesive age information for each frame.\nThe aging algorithm is formulated as a sequential decision-making process in which a goal-oriented agent interacts with the temporal visual environment. At each time step, the agent integrates information from the current and previous frames and then modifies its action accordingly. The agent receives a scalar reward at each time-step with the goal of maximizing the total long-term aggregate of rewards, emphasizing effective utilization of temporal observations in computing the aging transformation employed on the current frame.\n\nFormally, given an input video, let $\mathcal{I} \in \mathbb{R}^d$ be the image domain and $\mathbf{X}^t = \{\mathbf{x}_y^t,\mathbf{x}_o^t\}$ be an image pair at time-step $t$ consisting of the $t$-th frame $\mathbf{x}_y^t \in \mathcal{I}$ of the video at young age and the synthesized face $\mathbf{x}_o^t \in \mathcal{I}$ at old age.\nThe goal is to learn a synthesis function $\mathcal{G}$ that maps $\mathbf{x}_y^t$ to $\mathbf{x}_o^t$ as follows.\n\begin{equation}\n\footnotesize\n\begin{split}\n\mathbf{x}_o^t = \mathcal{G}(\mathbf{x}_y^t) | \mathbf{X}^{1:t-1}\n\end{split}\n\label{eqn:mapping1}\n\end{equation}\nThe conditional term indicates the temporal constraint needs to be considered during the synthesis process. 
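As a rough sketch (not the authors' implementation), the conditional mapping in Eqn.~\eqref{eqn:mapping1} amounts to threading the history $\mathbf{X}^{1:t-1}$ through the synthesis function frame by frame; \texttt{toy\_G} below is a hypothetical stand-in for the learned $\mathcal{G}$:

```python
import numpy as np

def synthesize_video(frames, G):
    """Frame-by-frame synthesis x_o^t = G(x_y^t) | X^{1:t-1}: the history of
    processed (young, old) pairs is passed to G so that temporal consistency
    can be enforced, instead of aging each frame independently."""
    history = []  # X^{1:t-1}: pairs (x_y, x_o) of earlier frames
    aged = []
    for x_y in frames:
        x_o = G(x_y, history)
        history.append((x_y, x_o))
        aged.append(x_o)
    return aged

# Hypothetical stand-in for the learned G: adds a fixed "aging" offset and
# blends with the previously aged frame to smooth the output over time.
def toy_G(x_y, history):
    delta = 0.1 * np.ones_like(x_y)
    if history:
        return 0.5 * (x_y + delta) + 0.5 * history[-1][1]
    return x_y + delta
```

Any real generator with the signature `G(frame, history)` could be dropped into the same loop.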
\nTo learn $\mathcal{G}$ effectively, we decompose $\mathcal{G}$ into sub-functions as follows.\n\begin{equation}\n\footnotesize\n\begin{split}\n \mathcal{G} = \mathcal{F}_2 \circ \mathcal{M} \circ \mathcal{F}_1\n \end{split} \n\end{equation}\nwhere $\mathcal{F}_1: \mathbf{x}_y^t \mapsto \mathcal{F}_1(\mathbf{x}_y^t)$ maps the young face image $\mathbf{x}_y^t$ to its representation in the feature domain; $\mathcal{M}: (\mathcal{F}_1(\mathbf{x}_y^t);\mathbf{X}^{1:t-1}) \mapsto \mathcal{F}_1(\mathbf{x}_o^t)$ defines the traversing function in the feature domain; and $\mathcal{F}_2: \mathcal{F}_1(\mathbf{x}_o^t) \mapsto \mathbf{x}_o^t$ is the mapping from the feature domain back to the image domain.\n\nBased on this decomposition, the architecture of our proposed framework (see Fig. \ref{fig:RL_Framework}) consists of three main processing steps: (1) Feature embedding; (2) Manifold traversal; and (3) Synthesizing final images from updated features.\nIn the second step, a Deep RL based framework is proposed to guarantee the consistency between video frames in terms of aging changes during the synthesis process.\n\n\subsection{Feature Embedding} \label{sec:FeatEmbbed}\nThe first step of our framework is to learn an embedding function $\mathcal{F}_1$ to map $\mathbf{x}_y^t$ into its latent representation $\mathcal{F}_1(\mathbf{x}_y^t)$. Although there could be various choices for $\mathcal{F}_1$, to produce high quality synthesized images in later steps, the chosen structure for $\mathcal{F}_1$ should produce a feature representation with two main properties: (1) \textit{linearly separable} and (2) \textit{detail preserving}. On one hand, with the former property, transforming the facial likeness from one age group to another age group can be represented as the problem of linearly traversing along the direction of a single vector in the feature domain. 
On the other hand, the latter property guarantees a certain detail to be preserved and produce high quality results. In our framework, CNN structure is used for $\\mathcal{F}_1$. \nIt is worth noting that there remain some compromises regarding the choice of deep layers used for the representation such that both properties are satisfied. \\textit{Linear separability} is preferred in deeper layers further along the linearization process while \\textit{details of a face} are usually embedded in more shallow layers \\cite{mahendran2015understanding}.\nAs an effective choice in several image-modification tasks \\cite{gatys2015texture, gatys2015neural}, we adopt the normalized VGG-19\\footnote{This network is trained on ImageNet for better latent space.} and use the concatenation of three layers $\\{conv3\\_1, conv4\\_1, conv5\\_1\\}$ as the feature embedding. \n\n\\begin{figure*}[t]\n\t\\centering \n\t\\includegraphics[width=1.5\\columnwidth]{State_Action_RL_Aging_new_small.jpg}\n\t\\caption{The process of selecting neighbors for age-transformation relationship. \\textbf{Best viewed in color and 2$\\times$ zoom in.}}\t\n\t\\label{fig:RL_policy_net}\n\\end{figure*}\n\n\\subsection{Manifold Traversing}\nGiven the embedding $\\mathcal{F}_1(\\mathbf{x}_y^t)$, the age progression process can be interpreted as the linear traversal from the younger age region of $\\mathcal{F}_1(\\mathbf{x}_y^t)$ toward the older age region of $\\mathcal{F}_1(\\mathbf{x}_o^t)$ within the deep-feature domain. 
Then the Manifold Traversing function $\\mathcal{M}$ can be written as in Eqn \\eqref{eqn:traversing}.\n\\begin{equation}\n\\footnotesize\n\\begin{split}\n \\mathcal{F}_1(\\mathbf{x}_o^t) & = \\mathcal{M}(\\mathcal{F}_1(\\mathbf{x}_y^t); \\mathbf{X}^{1:t-1}) \\\\\n& = \\mathcal{F}_1(\\mathbf{x}_y^t) + \\alpha \\Delta^{\\mathbf{x}^t|\\mathbf{X}^{1:t-1}}\n\\end{split}\n\\label{eqn:traversing}\n\\end{equation}\nwhere $\\alpha$ denotes the user-defined combination factor, and $\\Delta^{\\mathbf{x}^t|\\mathbf{X}^{1:t-1}}$ encodes the amount of aging information needed to reach the older age region for the frame $\\mathbf{x}_y^t$ conditional on the information of previous frames. \n\n\\subsubsection{Learning from Neighbors} \\label{sec:LearnFromNeighbor}\nIn order to compute $\\Delta^{\\mathbf{x}^t|\\mathbf{X}^{1:t-1}}$ containing only aging effects without the presence of other factors, i.e. identity, pose, etc., we exploit the relationship in terms of the aging changes between the nearest neighbors of $\\mathbf{x}_y^t$ in the two age groups. 
In particular, given $\mathbf{x}_y^t$, we construct two neighbor sets $\mathcal{N}_y^t$ and $\mathcal{N}_o^t$ that contain the $K$ nearest neighbors of $\mathbf{x}_y^t$ in the young and old age groups, respectively.\nThen \n\small\n$\Delta^{\mathbf{x}^t|\mathbf{X}^{1:t-1}}= \Delta^{\mathbf{x}^t|\mathbf{X}^{1:t-1}}_{\mathcal{A}(\cdot, \mathbf{x}_y^t)}$ \n\normalsize\nis estimated by:\n\n\begin{equation} \label{eqn:delta_f} \nonumber\n\footnotesize\n\begin{split}\n\Delta^{\mathbf{x}^t|\mathbf{X}^{1:t-1}} \n& =\frac{1}{K} \left[\sum_{\mathbf{x} \in \mathcal{N}_o^t} \mathcal{F}_1(\mathcal{A}(\mathbf{x},\mathbf{x}_y^t))- \sum_{\mathbf{x} \in \mathcal{N}_y^t} \mathcal{F}_1(\mathcal{A}(\mathbf{x},\mathbf{x}_y^t)) \right]\n\end{split}\n\end{equation}\n\normalsize\nwhere $\mathcal{A}(\mathbf{x},\mathbf{x}_y^t)$ denotes a face-alignment operator that positions the face in $\mathbf{x}$ with respect to the face location in $\mathbf{x}_y^t$. \nSince only the nearest neighbors of $\mathbf{x}_y^t$ are considered in the two sets, conditions apart from the age difference should be sufficiently similar between the two sets and subtracted away in $\Delta^{\mathbf{x}^t|\mathbf{X}^{1:t-1}}$. Moreover, the averaging operator also helps to ignore identity-related factors and therefore emphasizes age-related changes as the main source of difference to be encoded in $\Delta^{\mathbf{x}^t|\mathbf{X}^{1:t-1}}$. The remaining question is how to choose the appropriate neighbor sets such that the aging changes provided by $\Delta^{\mathbf{x}^t|\mathbf{X}^{1:t-1}}$ and $\Delta^{\mathbf{x}^{t-1}|\mathbf{X}^{1:t-2}}$ are consistent. 
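Under simplifying assumptions, the neighbor-based estimate of $\Delta^{\mathbf{x}^t|\mathbf{X}^{1:t-1}}$ and the linear traversal of Eqn.~\eqref{eqn:traversing} can be sketched as follows; here \texttt{F1} and \texttt{align} are placeholders for the embedding $\mathcal{F}_1$ and the alignment operator $\mathcal{A}(\cdot,\mathbf{x}_y^t)$, not the actual networks:

```python
import numpy as np

def aging_direction(F1, neighbors_young, neighbors_old, align):
    """Delta^{x^t|X^{1:t-1}}: difference between the mean embedding of the
    aligned old-age neighbors and that of the aligned young-age neighbors.
    Averaging suppresses identity-specific factors, leaving aging changes."""
    old_mean = np.mean([F1(align(x)) for x in neighbors_old], axis=0)
    young_mean = np.mean([F1(align(x)) for x in neighbors_young], axis=0)
    return old_mean - young_mean

def traverse(feat_young, delta, alpha):
    # Manifold traversal: F1(x_o^t) = F1(x_y^t) + alpha * Delta
    return feat_young + alpha * delta
```

With an identity embedding and constant neighbor sets this reduces to adding a fixed offset scaled by the user-defined factor $\alpha$.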
In the next section, a Deep RL based framework is proposed for selecting appropriate candidates for these sets.\n\n\subsubsection{Deep RL for Neighbor Selection}\nA straightforward technique for choosing the neighbor sets of $\mathbf{x}_y^t$ in young and old age is to select faces that are close to $\mathbf{x}_y^t$ based on some \textit{closeness criteria} such as distance in feature domain, or number of matched attributes. However, since these criteria are not frame-interdependent, they are unable to maintain visually cohesive age information across video frames. \nTherefore, we propose to exploit the relationship presented in the image pair $\{\mathbf{x}_y^t, \mathbf{x}_y^{t-1}\}$ and the neighbor sets of $\mathbf{x}_y^{t-1}$ as additional guidance for the selection process. Then an RL based framework is proposed and formulated as a sequential decision-making process with the goal of maximizing the temporal reward estimated by the consistency between the neighbor sets of $\mathbf{x}_y^t$ and $\mathbf{x}_y^{t-1}$.\n\nSpecifically, given two input frames $\{\mathbf{x}_y^t,\mathbf{x}_y^{t-1}\}$ and two neighbor sets $\{\mathcal{N}_y^{t-1}, \mathcal{N}_o^{t-1}\}$ of $\mathbf{x}_y^{t-1}$, the agent of a policy network will iteratively analyze the role of each neighbor of $\mathbf{x}_y^{t-1}$ in both young and old age in combination with the relationship between $\mathcal{F}_1 (\mathbf{x}_y^t)$ and $\mathcal{F}_1 (\mathbf{x}_y^{t-1})$ to determine new suitable neighbors for $\{\mathcal{N}_y^{t}, \mathcal{N}_o^{t}\}$ of $\mathbf{x}_y^{t}$.\nA new neighbor is considered appropriate when it is sufficiently similar to $\mathbf{x}_y^{t}$ and maintains aging consistency between the two frames.\nEach time a new neighbor is selected, the neighbor sets of $\mathbf{x}_y^t$ are updated and the agent receives a reward based on the similarity of the embedded aging information between the two frames.\nAs a result, the agent can iteratively explore an optimal 
route for selecting neighbors to maximize the long-term reward. Fig. \ref{fig:RL_policy_net} illustrates the process of selecting neighbors for the age-transformation relationship.\n\n\textbf{State:} The state at the $i$-th step $\mathbf{s}^t_i=\left[\mathbf{x}_y^{t}, \mathbf{x}_y^{t-1}, \mathbf{z}^{t-1}_i, (\mathcal{N}^t)_i, \mathcal{\bar{N}}^t, \mathbf{M}_i\right]$ is defined as a composition of six components: (1) the \textit{current frame} $\mathbf{x}_y^t$; (2) the \textit{previous frame} $\mathbf{x}_y^{t-1}$; (3) the \textit{current considered neighbor} $\mathbf{z}^{t-1}_i$ of $\mathbf{x}_y^{t-1}$, i.e. in either the young or the old age group; (4) the \textit{current construction of the two neighbor sets} $(\mathcal{N}^t)_i = \{(\mathcal{N}_y^t)_i,(\mathcal{N}_o^t)_i\}$ of $\mathbf{x}_y^{t}$ until step $i$; (5) the \textit{extended neighbor sets} $\mathcal{\bar{N}}^t=\{\mathcal{\bar{N}}_y^t,\mathcal{\bar{N}}_o^t\}$ consisting of $N$ neighbors, i.e. $N > K$, of $\mathbf{x}_y^{t}$ for each age group;\nand (6) a \textit{binary mask} $\mathbf{M}_i$ indicating which samples in $\mathcal{\bar{N}}^t$ have already been chosen in previous steps. \nNotice that in the initial state $\mathbf{s}^t_0$, the two neighbor sets $\{(\mathcal{N}_y^t)_0, (\mathcal{N}_o^t)_0\}$ are initialized using the $K$ nearest neighbors of $\mathbf{x}_y^t$ in the two age groups, respectively. 
\nTwo measurement criteria are considered for finding the nearest neighbors: \textit{the number of matched facial attributes}, e.g. gender, expressions, etc.; and \textit{the cosine distance between two feature embedding vectors}.\nAll values of the mask $\mathbf{M}_i$ are set to 1 in $\mathbf{s}^t_0$.\n\n\textbf{Action:} Using the information from the chosen neighbor $\mathbf{z}^{t-1}_i$ of $\mathbf{x}_y^{t-1}$, and the relationship of $\{\mathbf{x}_y^{t}, \mathbf{x}_y^{t-1}\}$, an action $a_{i}^{t}$ is defined as selecting a new neighbor for the current frame such that, with this new sample added to the neighbor sets of the current frame, the aging-synthesis features between $\mathbf{x}_y^{t}$ and $\mathbf{x}_y^{t-1}$ are more consistent. Notice that since not all samples in the database are sufficiently similar to $\mathbf{x}_y^{t}$, we restrict the action space by selecting among the $N$ nearest neighbors of $\mathbf{x}_y^{t}$. In our configuration, $N = n \cdot K$ where $n$ and $K$ are set to 4 and 100, respectively.\n\n\textbf{Policy Network:} \nAt each time step $i$, the policy network first encodes the information provided in state $\mathbf{s}^t_i$ as\n\begin{equation}\n\footnotesize\n\begin{split}\n\mathbf{u}^t_i &= \left[\delta^{\text{pool5}}_{\mathcal{F}_1}(\mathbf{x}_y^t, \mathbf{x}_y^{t-1}), \mathcal{F}^{\text{pool5}}_1(\mathbf{z}_i^{t-1})\right] \\\n\mathbf{v}^t_i &=\left[d\left((\mathcal{N}^t)_i,\mathbf{x}_y^t\right), d\left(\mathcal{\bar{N}}^t,\mathbf{x}_y^t\right), \mathbf{M}_i\right]\n\end{split}\n\end{equation}\nwhere $\mathcal{F}^{\text{pool5}}_1$ is the embedding function as presented in Sec. 
\\ref{sec:FeatEmbbed}, but the $pool5$ layer is used as the representation; $\\delta^{\\text{pool5}}_{\\mathcal{F}_1}(\\mathbf{x}_y^t, \\mathbf{x}_y^{t-1}) = \\mathcal{F}^{\\text{pool5}}_1(\\mathbf{x}_y^t)-\\mathcal{F}^{\\text{pool5}}_1(\\mathbf{x}_y^{t-1})$ embeds the relationship of $\\mathbf{x}_y^{t}$ and $\\mathbf{x}_y^{t-1}$ in the feature domain. $d\\left((\\mathcal{N}^t)_i,\\mathbf{x}_y^t\\right)$ is the operator that maps all samples in $(\\mathcal{N}^t)_i$ to their representation in the form of cosine distance to $\\mathbf{x}_y^t$.\nThe last layer of the policy network is reformulated as $P(\\mathbf{z}^t_i = \\mathbf{x}_j|\\mathbf{s}_{i}^t) = e^{c_i^j} \/ {\\sum_{k} e^{c_i^k}}$,\nwhere \n\\small\n$\\mathbf{c}_i = \\mathbf{M}_i \\odot \\left(\\mathbf{W} \\mathbf{h}_i^t + \\mathbf{b}\\right)$\n\\normalsize\nand \n\\small\n$\\mathbf{h}_i^t=\\mathcal{F}_{\\pi}\\left(\\mathbf{u}_i^t,\\mathbf{v}^t_i; \\theta_{\\pi}\\right)$\n\\normalsize\n; $\\{\\mathbf{W},\\mathbf{b}\\}$ are the weight and bias of the hidden-to-output connections.\nSince $\\mathbf{h}_i^t$ consists of the features of the sample picked for the neighbors of $\\mathbf{x}_y^{t-1}$ and the temporal relationship between $\\mathbf{x}_y^{t-1}$ and $\\mathbf{x}_y^{t}$, it directly encodes the information of \\textit{how the face changes} and \\textit{what aging information} from the previous frame has been used.\nThis process helps the agent evaluate its choice to confirm the optimal candidate for $\\mathbf{x}_y^t$\nto construct the neighbor sets.\n\nThe output of the policy network is an $(N+1)$-dimensional vector $\\mathbf{p}$ indicating the probabilities of all available actions $P(\\mathbf{z}^t_{i}=\\mathbf{x}_j|\\mathbf{s}_{i}^t), j=1,\\dots,N$, where each entry indicates the probability of selecting sample $\\mathbf{x}_j$ for step $i$. 
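The masked action-probability computation of the output layer can be sketched in NumPy as below. This is a hypothetical rendering, not the authors' code: here the mask excludes already-chosen samples by assigning them zero probability (a standard variant of the multiplicative masking written above), and shapes are illustrative:

```python
import numpy as np

def action_probs(h, W, b, mask):
    """Distribution over candidate actions given hidden state h.
    mask[j] = 1 if sample j may still be chosen, 0 if already selected."""
    logits = W @ h + b                             # c_i before masking
    logits = np.where(mask > 0, logits, -np.inf)   # chosen samples excluded
    e = np.exp(logits - logits[mask > 0].max())    # numerically stable softmax
    return e / e.sum()

rng = np.random.default_rng(0)
p = action_probs(rng.normal(size=8), rng.normal(size=(5, 8)),
                 np.zeros(5), np.array([1, 1, 0, 1, 1]))
# during training an action is sampled, e.g. rng.choice(5, p=p);
# during testing the argmax is taken instead.
```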
Note that the $(N+1)$-th value of $\\mathbf{p}$ corresponds to the action of not updating the neighbor sets at this step.\nDuring training, an action $a_{i}^{t}$ is taken by stochastically sampling from this probability distribution. During testing, the action with the highest probability is chosen for the synthesis process.\n\n\n\\textbf{State transition:} After the action $a_{i}^t$ has been decided in state $\\mathbf{s}_{i}^t$, the next state $\\mathbf{s}_{i+1}^t$ can be obtained via the state-transition function $\\mathbf{s}^t_{i+1} = Transition(\\mathbf{s}^t_{i}, a_i^t)$ where $\\mathbf{z}^{t-1}_i$ is updated to the next unconsidered sample $\\mathbf{z}^{t-1}_{i+1}$ in the neighbor sets of $\\mathbf{x}_y^{t-1}$. Then the neighbor that is least similar to $\\mathbf{x}_y^{t}$ in the corresponding sets of $\\mathbf{z}^{t-1}_i$ is replaced by $\\mathbf{x}_j$ according to the action $a_{i}^t$.\nThe \\textit{terminal state} is reached when all the samples of $\\mathcal{N}_y^{t-1}, \\mathcal{N}_o^{t-1}$ have been considered.\n\n\n\\textbf{Reward:}\nDuring training, the agent receives a reward signal $r^t_i$ from the environment after executing an action $a_{i}^t$ at step $i$. In our proposed framework, the reward is chosen to measure the aging consistency between video frames as\n\\begin{equation} \\label{eqn:reward}\n\\footnotesize\nr^t_i = \\frac{1}{\\parallel \n\t\\Delta^{\\mathbf{x}^t|\\mathbf{X}^{1:t-1}}_{i,\\mathcal{A}(\\cdot, \\mathbf{x}_y^t)} - \\Delta^{\\mathbf{x}^{t-1}|\\mathbf{X}^{1:t-2}}_{\\mathcal{A}(\\cdot, \\mathbf{x}_y^t)} \\parallel + \\epsilon}\n\\end{equation}\nNotice that in this formulation, we align all neighbors of both previous and current frames to $\\mathbf{x}^t_y$. Since the same alignment operator $\\mathcal{A}(\\cdot,\\mathbf{x}_y^{t})$ is used on all neighbor sets of both previous and current frames, the effect of alignment factors, i.e., poses, expressions, locations of the faces, etc., can be minimized in $r^t_i$. 
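In code, the reward of Eqn. \eqref{eqn:reward} is simply an inverse distance between the aligned aging-transformation features of consecutive frames. A small sketch (array shapes and the value of $\epsilon$ are illustrative, not the authors' settings):

```python
import numpy as np

def reward(delta_cur, delta_prev, eps=1e-3):
    """Inverse feature distance: identical aging information between
    consecutive frames yields the maximal reward 1/eps."""
    return 1.0 / (np.linalg.norm(delta_cur - delta_prev) + eps)

d = np.ones(4)
r_max = reward(d, d)     # identical aging features -> maximal reward
r_far = reward(d, -d)    # dissimilar aging features -> small reward
```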
Therefore, $r^t_i$ reflects only the difference in aging information embedded into $\\mathbf{x}_y^{t}$ and $\\mathbf{x}_y^{t-1}$.\n\n\\textbf{Model Learning:} The training objective is to maximize the sum of the reward signals: $R = \\sum_i r^t_i$. We optimize the recurrent policy network with the REINFORCE algorithm \\cite{Williams92simplestatistical} guided by the reward given at each time step. \n\n\\subsection{Synthesizing from Features}\nAfter the neighbor sets of $\\mathbf{x}_y^t$ are selected, the $\\Delta^{\\mathbf{x}^t|\\mathbf{X}^{1:t-1}}$ can be computed as presented in Sec. \\ref{sec:LearnFromNeighbor} and the embedding of $\\mathbf{x}_y^t$ in the old age region $\\mathcal{F}_1(\\mathbf{x}_o^t)$ is estimated via Eqn. \\eqref{eqn:traversing}. In the final stage, $\\mathcal{F}_1(\\mathbf{x}_o^t)$ can then be mapped back into the image domain $\\mathcal{I}$ via $\\mathcal{F}_2$, which can be achieved by the optimization shown in Eqn. \\eqref{eqn:tv_update} \\cite{mahendran2015understanding}.\n\\begin{equation} \\label{eqn:tv_update}\n\\small\n\\mathbf{x}^{t*}_o = \\arg \\min_{\\mathbf{x}} \\frac{1}{2} \\parallel \\mathcal{F}_1(\\mathbf{x}_o^t) - \\mathcal{F}_1(\\mathbf{x}) \\parallel^2_2 + \\lambda_{V^\\beta} R_{V^\\beta}(\\mathbf{x})\n\\end{equation}\nwhere $R_{V^\\beta}$ represents the Total Variation regularizer encouraging smooth transitions between pixel values.\n\n\\begin{figure}[t]\n\t\\centering \\includegraphics[width=0.9\\columnwidth]{Fig_AgingResults.jpg}\n\t\\caption{\\textbf{Age Progression Results.} For each subject, the two rows show the input frames at the young age and the age-progressed faces at 60 years old, respectively.} \n\t\\label{fig:Video_AP_frontal}\n\\end{figure}\n\n\\section{Experimental Results}\n\n\\subsection{Databases} \\label{subsec:db}\nThe proposed approach is trained and evaluated using non-overlapping training and testing databases. 
Particularly, the neighbor sets are constructed using a large-scale database comprising face images from our collected \\textbf{AGFW-v2}\nand \\textbf{LFW-GOOGLE} \\cite{upchurch2016deep}.\nThen the policy network is trained using videos from \\textbf{300-VW} \\cite{shen2015first}. \nFinally, the video set from AGFW-v2 is used for evaluation.\n\n\\textbf{LFW-GOOGLE} \\cite{upchurch2016deep}: includes 44,697 high-resolution images collected using the names of 5,512 celebrities. \nThis database does not have age annotation. \nTo obtain the age labels, we employ the age estimator in \\cite{Rothe-IJCV-2016} to produce initial labels, which are manually corrected as needed. \n\n\\textbf{300-VW} \\cite{shen2015first}: includes 218,595 frames from 114 videos. Similar to the video set of AGFW-v2, the videos are movie or presentation sessions containing one face per frame.\n\n\n\\begin{figure}[t]\n\t\\centering \\includegraphics[width=1\\columnwidth]{DifferentAge_Ex.png}\n\t\\caption{\\textbf{Age Progression Results.} Given different frames of a subject, our approach can consistently synthesize the faces of that subject at different age groups.} \t\\label{fig:Video_AP_DifferentAgeGroup}\n\\end{figure}\n\n\\subsection{Implementation Details} \\label{subsec:imple}\n\n\\textbf{Data Setting.} In order to construct the neighbor sets for an input frame in the young and old age groups, images from AGFW-v2 and LFW-GOOGLE are combined and divided into 11 age groups from 10 to 65 with an age span of five years. \n\n\\textbf{Model Structure and Training.} For the policy network, we employ a neural network with two hidden layers of 4096 and 2048 hidden units, respectively. Rectified Linear Unit (ReLU) activation is adopted for each hidden layer. 
\nThe videos from 300-VW are used to train the policy network.\n\n\\textbf{Computational time.} The processing time of the synthesis process depends on the resolution of the input video frames.\nIt takes roughly 40 seconds for a $240 \\times 240$ frame and 4.5 minutes for a video frame with a resolution of $900 \\times 700$.\nWe evaluate on a system using an Intel i7-6700 CPU@3.4GHz with an NVIDIA GeForce TITAN X GPU. \n\n\\subsection{Age Progression} \\label{subsec:agingresult}\nThis section demonstrates the validity of the approach for robustly and consistently synthesizing age-progressed faces across consecutive frames of input videos.\n\n\\textbf{Age Progression in frontal and off-angle faces.} \nFigs. \\ref{fig:Video_AP_frontal} and \\ref{fig:Video_AP_DifferentAgeGroup} illustrate our age-progression results across frames from AGFW-v2 videos that contain both frontal and off-angle faces. From these results, one can see that even in the case of \\textit{frontal faces} (i.e., the major changes between frames come from facial expressions and movements of the mouth and lips), or \\textit{off-angle faces} (i.e., more challenging due to pose effects in combination with other variations), \nour proposed method is able to robustly synthesize aging faces. Wrinkles in soft-tissue areas (i.e., under the subject's eyes and around the cheeks and mouth) are coherent and robust across consecutive synthesized frames. We also compare our method against the Temporal Non-volume Preserving (TNVP) approach \\cite{Duong_2017_ICCV} and Face Transformer (FT) \\cite{faceTransformer} in Fig. \\ref{fig:AP_Comparisons}. These results further show the advantages of our model: both TNVP and FT are unable to ensure consistency between frames and may produce a different age-progressed face for each input. Meanwhile, in our results, the temporal information is efficiently exploited. This emphasizes the crucial role of the learned policy network. 
\n\n\n\\begin{figure}[t]\n\t\\centering \\includegraphics[width=1\\columnwidth]{Fig_AgingComparison.jpg}\n\t\\caption{\\textbf{Comparisons between age progression approaches}. For each subject, the top row shows frames in the video at a younger age. The next three rows are our results, TNVP \\cite{Duong_2017_ICCV} and Face Transformer \\cite{faceTransformer}, respectively.}\n\t\\label{fig:AP_Comparisons}\n\\end{figure}\n\n\n\\textbf{Aging consistency.} Table \\ref{tb:Consistency_Eval} compares the aging consistency between different approaches.\nFor the \\textbf{\\textit{consistency measurement}}, we adopt the average inverted reward $r^{-1}$ of all frames for each synthesized video. \nFurthermore, to validate the \\textbf{\\textit{temporal smoothness}}, we first compute the optical flow, i.e., an estimation of image displacements, between frames of each video to estimate changes in pixels through time. Then we evaluate the differences ($\\ell_2$-norm) between the flows of the original versus synthesized videos. \nFrom these results, one can see that the policy network consistently and robustly maintains an appropriate amount of aging embedded in each frame and, therefore, produces smoother synthesis across frames in the output videos.\n\n\n\\subsection{Video Age-Invariant Face Recognition} \\label{subsec:recog}\nThe effectiveness of our proposed approach is also validated in terms of the performance gain for cross-age face verification. With the RL approach, not only is consistency guaranteed, but improvements are also made in both matching accuracy and matching-score deviation. We adapt one of the state-of-the-art deep face matching models in \\cite{deng2018arcface} for this experiment. \nWe set up the face verification as follows. For all videos with the subject's age labels in the video set of AGFW-v2, the proposed approach is employed to synthesize all video frames to the current ages of the corresponding subjects in the videos. 
Then each frame of the age-progressed videos is matched against the real face images of the subjects at the current age. The matching scores distributions between original (young) and aged frames are presented in Fig. \\ref{fig:MatchingScoreDistribution}. Compared to the original frames, our age-progressed faces produce higher matching scores and, therefore, improve the matching performance over original frames. Moreover, with the consistency during aging process, the score deviation is maintained to be low. This also helps to improve the overall performance further. The matching accuracy among different approaches is also compared in Table \\ref{tb:Consistency_Eval} to emphasize the advantages of our proposed model.\n\n\\begin{table}[t]\n\t\\footnotesize\n\t\\centering\n\t\\caption{Comparison results in terms of consistency and temporal smoothness (\\textit{smaller value indicates better consistency}); and matching accuracy (\\textit{higher value is better}). \n\t} \n\t\\label{tb:Consistency_Eval} \n\t\\small\n\t\\begin{tabular}{l c c c }\n\t\t\\Xhline{2\\arrayrulewidth}\n\t\t\\textbf{Method} & \n\t\t\\begin{tabular}{@{}c@{}}\\textbf{Aging} \\\\ \\textbf{Consistency} \\end{tabular}& \\begin{tabular}{@{}c@{}}\\textbf{Temporal}\\\\ \\textbf{Smoothness}\\end{tabular} & \\begin{tabular}{@{}c@{}}\\textbf{Matching}\\\\ \\textbf{Accuracy}\\end{tabular}\\\\\n\t\t\\hline\n\t\t\\begin{tabular}{@{}l@{}} Original Frames\\end{tabular} & $-$ & $-$ & 60.61\\%\\\\\n\t\t\\hline\n\t\tFT \\cite{faceTransformer} & 378.88 & 85.26 & 67.5\\%\\\\\n\t\tTNVP \\cite{Duong_2017_ICCV} & 409.45 & 87.01 & 71.57\\%\\\\\n\t\tIPCGANs \\cite{wang2018face_aging} & 355.91 & 81.45&73.17\\%\\\\\n\t\t\\hline\n\t\t\\hline\n\t\t\\textbf{Ours(Without RL)} & 346.25 & 75.7 & 78.06\\%\\\\\n\t\t\\textbf{Ours(With RL)} & \\textbf{245.64} & \\textbf{61.80} & \\textbf{83.67\\%}\\\\\n\t\t\\Xhline{2\\arrayrulewidth}\n\t\\end{tabular}\n\\end{table}\n\n\n\\begin{figure}[t]\n\t\\centering 
\\includegraphics[width=0.85\\columnwidth]{Aging_Matching_Score_Distribution.png}\n\t\\caption{The distributions of the matching scores (of each age group) between frames of original and age-progressed videos against real faces of the subjects at the current age. }\t\n\t\\label{fig:MatchingScoreDistribution}\n\\end{figure}\n\n\\section{Conclusions}\nThis work has presented a novel Deep RL based approach for age progression\nin videos.\nThe model inherits the strengths of both recent advances of deep networks and reinforcement learning techniques to synthesize aged faces of given subjects both plausibly and coherently across video frames.\nOur method can generate age-progressed facial likenesses in videos with consistently aging features across frames. Moreover, our method guarantees preservation of the subject's visual identity after synthesized aging effects.\n\n{\\small\n\\bibliographystyle{ieee_fullname}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\nRecently, lots of attention has been devoted to studies of different systems in a space with\na deformed Heisenberg algebra that takes into account the quantum nature of space on the phenomenological level.\nThese works are motivated\nby several independent lines of investigations in string theory and quantum gravity (see, e.g., \\cite{gross, maggiore, witten}) which lead to the\nGeneralized Uncertainty Principle (GUP)\n\\begin{eqnarray}\n\\Delta X\\ge{\\hbar\\over2}\\left({1\\over \\Delta P}+\\beta\\Delta P\\right)\n\\end{eqnarray}\nand suggest the existence of the\nfundamental minimal length $\\Delta X_{\\rm min}=\\hbar\\sqrt\\beta$, which is\nof order of Planck's length $l_p=\\sqrt{\\hbar G\/c^3}\\simeq 1.6\\times 10^{-35}\\rm m$.\n\nIt was established that minimal length\ncan be obtained in the frame of small quadratic modification (deformation) of the Heisenberg algebra \\cite{Kem95,Kem96}\n\\begin{eqnarray}\n[X,P]=i\\hbar(1+\\beta P^2).\n\\end{eqnarray}\nIn the classical 
limit $\\hbar\\to 0$ the quantum-mechanical commutator for operators is replaced by the Poisson bracket for corresponding classical variables\n\\begin{eqnarray}\n{1\\over i\\hbar}[X,P]\\to\\{X,P\\},\n\\end{eqnarray}\nwhich in the deformed case reads\n\\begin{eqnarray}\n\\{X,P\\}=(1+\\beta P^2).\n\\end{eqnarray}\n\n We point out that historically the first algebra of that kind in the relativistic case was proposed by Snyder in 1947 \\cite{Snyder47}. But only after investigations in string theory and quantum gravity the considerable interest in the studies of physical properties of classical and quantum systems in spaces with deformed algebras appeared.\n\nObservation that GUP can be obtained from the deformed Heisenberg algebra opens the possibility to study the influence of minimal length on properties of physical systems on the quantum level as well as on the classical one.\n\nDeformed commutation relations bring new difficulties in the quantum\nmechanics as well as in the classical one. 
Only a few problems are known to be solved exactly.\nThey are: one-dimensional harmonic\noscillator with minimal uncertainty in position \\cite{Kem95} and\nalso with minimal uncertainty in position and momentum\n\\cite{Tkachuk1,Tkachuk2}, $D$-dimensional isotropic harmonic\noscillator \\cite{chang, Dadic}, three-dimensional Dirac oscillator\n\\cite{quesne},\n(1+1)-dimensional Dirac\noscillator within Lorentz-covariant deformed algebra \\cite{Quesne10909},\none-dimensional Coulomb problem\n\\cite{fityo}, and\nthe\nsingular inverse square\npotential with a minimal length \\cite{Bou1,Bou2}.\nThree-dimensional\nCoulomb problem with deformed Heisenberg algebra was studied within the perturbation theory \\cite{Brau,Benczik,mykola,Stet,mykolaOrb}.\nIn \\cite{Stet07} the scattering problem in the deformed space with minimal length was studied.\nThe ultra-cold\nneutrons in gravitational field with minimal length were considered in\n\\cite{Bra06,Noz10,Ped11}.\nThe influence of minimal length on Lamb's shift, Landau levels, and tunneling current in scanning tunneling microscope was studied \\cite{Das,Ali2011}.\nThe Casimir effect in a space with minimal length was examined in \\cite{Frassino}.\nIn \\cite{Vaki} the effect of noncommutativity and of the existence of a minimal length on the phase space of cosmological model was investigated.\nThe authors of paper \\cite{Batt}\nstudied various physical consequences which follow from the noncommutative Snyder space-time geometry.\nThe classical mechanics in a space with deformed Poisson brackets was studied\nin \\cite{BenczikCl,Fryd,Sil09}.\nThe composite system ($N$-particle system) in the deformed space with\nminimal length was studied in \\cite{Quesne10,Bui10}.\n\nNote that deformation of Heisenberg algebra brings not only technical difficulties in solving of corresponding equations\nbut also brings problems of fundamental nature.\nOne of them is the violation of the equivalence principle in\nspace with minimal length 
\\cite{Ali11}.\nThis is the result of the assumption that the parameter of deformation\nfor\nmacroscopic bodies of different mass is unique.\nIn \\cite{Quesne10} we showed that the center of mass of a macroscopic body in deformed space is\ndescribed by an effective parameter of deformation, which is essentially smaller than the parameters of deformation for the particles constituting the body. Using the result of \\cite{Quesne10} for the effective parameter of deformation, we show that the equivalence principle in the space with minimal length can be recovered.\nIn section 3 we reproduce the result of \\cite{Quesne10} concerning the effective parameter of deformation for the center of mass on the classical level and in addition show that the independence of the kinetic energy of the composition leads to the recovery of the equivalence principle in the space with deformed Poisson bracket.\n\n\\section{Free fall of particle in a uniform gravitational field}\n\n\nThe Hamiltonian of a particle (a macroscopic body which we consider as a point particle) of mass $m$ in a uniform gravitational field reads\n\\begin{eqnarray}\nH={P^2\\over 2m}-mgX,\n\\end{eqnarray}\nwhere the gravitational field is characterized by the acceleration $g$ and is directed along the $x$ axis.\nNote that here the inertial mass ($m$ in the first term) is equal to the gravitational mass\n($m$ in the second one).\nThe Hamiltonian equations of motion in space with deformed Poisson brackets are as follows\n\\begin{eqnarray}\\label{dxp}\n\\dot{X}=\\{X,H\\}={P\\over m}(1+\\beta P^2),\\\\\n\\dot{P}=\\{P,H\\}=mg(1+\\beta P^2).\n\\end{eqnarray}\nWe impose zero initial conditions for position and momentum, namely $X=0$ and $P=0$ at $t=0$.\nThese equations can be solved easily.\nFrom the second equation we find\n\\begin{eqnarray}\nP={1\\over \\sqrt\\beta}\\tan(\\sqrt\\beta 
mgt)\\over\\cos^2(\\sqrt\\beta mgt)}\n\\end{eqnarray}\nand for position\n\\begin{eqnarray}\\label{solX}\nX={1\\over 2g m^2\\beta}\\tan^2(\\sqrt\\beta mgt).\n\\end{eqnarray}\nOne can verify that the motion is periodic with period $T={\\pi\\over m\\sqrt\\beta g}$. The particle moves from $X=0$\nto $X=\\infty$, then reflects from $\\infty$ and moves in the opposite direction to $X=0$.\nBut from the physical point of view this solution is correct only for time $t\\ll T$ when the velocity of particle\nis much smaller than the speed of light. In other cases, the relativistic mechanics must be considered.\n\nIt is instructive to write out the results for velocity and coordinate in the first order over $\\beta$:\n\\begin{eqnarray}\\label{soldXap}\n\\dot{X}=gt\\left(1+{4\\over 3}\\beta m^2g^2t^2\\right),\\\\\\label{solXap}\nX= {gt^2\\over2}\\left(1+{2\\over 3}\\beta m^2g^2t^2\\right).\n\\end{eqnarray}\nIn the limit\n$\\beta\\to 0$ we reproduce the well known results\n\\begin{eqnarray}\n\\dot{X}=gt, \\ \\\nX= {gt^2\\over2},\n\\end{eqnarray}\nwhere kinematic characteristics, such as velocity and position of a free-falling particle depend only on initial position and velocity of the particle and do not depend on the composition and mass of the particle.\nIt is in agreement with the weak equivalence principle, also known as the universality of free fall or the Galilean equivalence principle. 
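The closed-form solution above can be checked numerically against the deformed Hamiltonian equations (a small sketch; the parameter values are purely illustrative):

```python
import numpy as np

beta, m, g = 1e-4, 1.0, 9.81          # illustrative values

def P(t):   # closed-form momentum
    return np.tan(np.sqrt(beta) * m * g * t) / np.sqrt(beta)

def X(t):   # closed-form position, Eq. (solX)
    return np.tan(np.sqrt(beta) * m * g * t) ** 2 / (2 * g * m**2 * beta)

# verify dX/dt = (P/m)(1 + beta P^2), Eq. (dxp), by central differences
t = np.linspace(0.01, 0.5, 50)
h = 1e-6
lhs = (X(t + h) - X(t - h)) / (2 * h)
rhs = P(t) / m * (1.0 + beta * P(t) ** 2)
# lhs and rhs agree to finite-difference accuracy
```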
Note that in the nondeformed case, when the Newtonian equation of motion in a gravitational field is fulfilled, the weak equivalence principle is nothing else than the statement of the equivalence\nof inertial and gravitational masses.\n\nAs we see from (\\ref{soldX}) and (\\ref{solX}) or (\\ref{soldXap}) and (\\ref{solXap}), in the deformed space\nthe trajectory of the point mass in the gravitational field depends on the mass of the particle if we suppose that\nthe parameter of deformation is the same for all bodies.\nSo, in this case the equivalence principle is violated.\nIn \\cite{Quesne10} we showed at the quantum level that in fact the motion of the center of mass of a composite system in deformed space is governed by an\neffective parameter (in \\cite{Quesne10} it is denoted as $\\tilde\\beta_0$, here we denote it as $\\beta$). So, the parameter of deformation for a macroscopic body\nis\n\\begin{eqnarray}\\label{betaN}\n\\beta=\\sum_i\\mu_i^3\\beta_i,\n\\end{eqnarray}\nwhere\n$\\mu_i=m_i\/\\sum_i m_i$, $m_i$ and $\\beta_i$ are the masses and parameters of deformation of the particles which form the composite system (body). Note that in the next section we derive this result by considering the kinetic energy of a body consisting of $N$ particles.\n\nFirst, let us consider the special case $m_i=m_1$ and $\\beta_i=\\beta_1$ when the body consists of identical elementary particles. 
Then we find\n\\begin{eqnarray}\n\\beta={\\beta_1\\over N^2},\n\\end{eqnarray}\nwhere $N$ is the number of particles of the body with mass $m=Nm_1$.\nNote that expressions (\\ref{soldX}) and (\\ref{solX}) contain the combination $\\sqrt\\beta m$.\nSubstituting the effective parameter of deformation\n$\\beta_1\/N^2$ instead of $\\beta$ we find\n\\begin{eqnarray}\n\\sqrt\\beta m=\\sqrt\\beta_1 m\/N=\\sqrt\\beta_1 m_1.\n\\end{eqnarray}\nAs a result, the trajectory now does not depend on the mass of the macroscopic body but depends on\n$\\sqrt\\beta_1 m_1$, which is the same for bodies of different mass.\nSo, the equivalence principle is recovered.\n\nThe general case when a body consists of different elementary particles is more complicated.\nThen it is possible that different combinations of elementary particles\nlead to the same mass but with different effective parameters of deformation.\nIn that case the motion of bodies of equal\nmass but different composition will be different.\nThis also violates the weak equivalence principle.\nThe equivalence principle can be recovered when we suppose that\n\\begin{eqnarray}\\label{gamma}\n\\sqrt\\beta_1 m_1=\\sqrt\\beta_2 m_2=\\dots=\\sqrt\\beta_N m_N=\\gamma.\n\\end{eqnarray}\nIndeed, then the effective parameter of deformation for a macroscopic body is\n\\begin{eqnarray}\n\\beta=\\sum_i{m_i^3\\over(\\sum_i m_i)^3}\\beta_i={\\gamma^2\\over(\\sum_i m_i)^2}={\\gamma^2\\over m^2}\n\\end{eqnarray}\nand thus\n\\begin{eqnarray}\n\\sqrt\\beta m=\\gamma,\n\\end{eqnarray}\nwhich is the same as (\\ref{gamma}).\nNote that the trajectory of motion in this case does not depend on mass and depends only on $\\gamma$,\nwhich takes the same value for all bodies.\nIt means that bodies of different mass and different composition move in a gravitational field in the same way\nand thus the weak equivalence principle is not violated when (\\ref{gamma}) is satisfied. Equation (\\ref{gamma}) introduces one new fundamental constant $\\gamma$. 
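Relation (\ref{gamma}) and formula (\ref{betaN}) can be combined in a quick numerical check (a sketch with illustrative masses; $\gamma$ is set to an arbitrary value):

```python
def effective_beta(masses, betas):
    """Eq. (betaN): effective deformation parameter of a composite body."""
    M = sum(masses)
    return sum((mi / M) ** 3 * bi for mi, bi in zip(masses, betas))

gamma = 2.0                                  # arbitrary illustrative constant
masses = [1.0, 3.0, 0.5]
betas = [gamma**2 / mi**2 for mi in masses]  # particles obeying Eq. (gamma)
beta = effective_beta(masses, betas)
# sqrt(beta) * total mass reproduces gamma, independently of the composition
```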
Note that the parameter $1\/\\gamma$ has the dimension of velocity. The parameters of deformation $\\beta_i$ of particles or macroscopic bodies of mass $m_i$ are determined by the fundamental constant $\\gamma$ as follows\n\\begin{eqnarray}\\label{bg}\n\\beta_i={\\gamma^2\\over m_i^2}.\n\\end{eqnarray}\nSo, the parameter of deformation is completely determined by the mass of a particle.\nIn the next section we derive formula (\\ref{betaN}) on the classical level and give some arguments concerning\nthe relation (\\ref{gamma}).\n\n\\section{Kinetic energy of a composite system in deformed space and parameter of deformation}\n\nIn this section we use the natural statement:\n{\\it The kinetic energy has the additivity property and does not depend on the composition of a body but only on its mass.}\n\nFirst, we consider {\\it the additivity property of the kinetic energy.}\nLet us consider $N$ particles with masses $m_i$ and deformation parameters $\\beta_i$.\nIt is equivalent to the situation when the macroscopic body is divided into $N$ parts which can be treated as point particles with corresponding masses and parameters of deformation.\nWe consider the case when each particle of the system moves with the same velocity as the whole system.\n\nLet us rewrite the kinetic energy as a function of velocity.\nFrom the relation between velocity and momentum (\\ref{dxp}), in the first-order approximation in $\\beta$\nwe find\n\\begin{eqnarray}\nP=m \\dot X(1-\\beta m^2\\dot X^2).\n\\end{eqnarray}\nThen the kinetic energy as a function of velocity in the first-order approximation in $\\beta$ reads\n\\begin{eqnarray}\\label{TV}\nT={m\\dot X^2\\over 2}-\\beta m^3\\dot X^4.\n\\end{eqnarray}\n\nThe kinetic energy of the whole system is given by (\\ref{TV}) where $m=\\sum_i m_i$. 
On the other hand,\nthe kinetic energy of the whole system is the sum of kinetic energies of particles which constitute the system:\n\\begin{eqnarray}\\label{TVsum}\nT=\\sum_i T_i={m\\dot X^2\\over 2}-\\sum_i\\beta_i m_i^3\\dot X^4,\n\\end{eqnarray}\nwhere we take into account that velocities of all particles are the same as the velocity\nof the whole system $\\dot X_i=\\dot X$, $i=1,\\dots,N$.\nComparing (\\ref{TV}) and (\\ref{TVsum}) we obtain (\\ref{betaN}).\n\nNow let us consider {\\it the independence of kinetic energy on the composition of a body}.\nIt is enough to consider a body of a fixed mass consisting of two parts (particles) with masses $m_1=m\\mu$ and $m_2=m(1-\\mu)$, where $0\\le\\mu\\le1$. Parameters of deformation for the first and second particles are $\\beta_1=\\beta_{\\mu}$ and $\\beta_2=\\beta_{1-\\mu}$, here we write explicitly that\nparameters of deformations are some function of mass ($\\mu=m_1\/m$ is dimensionless mass).\nThe particles with different masses constitute the body with the same mass $m=m_1+m_2$.\nSo, in this situation we have the body of the same mass but with different composition.\n\nThe kinetic energy of the whole body is given by (\\ref{TV}) with the\nparameter of deformation\n\\begin{eqnarray}\\label{Eqbeta}\n\\beta=\\beta_{\\mu}\\mu^3+\\beta_{1-\\mu}(1-\\mu)^3.\n\\end{eqnarray}\nSince the kinetic energy does not depend on the composition, the parameter of deformation for the whole body must be fixed $\\beta={\\rm const}$ for different $\\mu$. Thus (\\ref{Eqbeta}) is the equation for $\\beta_{\\mu}$ as a function of $\\mu$ at fixed $\\beta$.\nOne can verify that the solution reads\n\\begin{eqnarray}\n\\beta_{\\mu}={\\beta\\over\\mu^2}.\n\\end{eqnarray}\nTaking into account that $\\mu=m_1\/m$ we find\n\\begin{eqnarray}\n\\beta_1 m_1^2=\\beta m^2\n\\end{eqnarray}\nthat corresponds to (\\ref{gamma}). So, the independence of the kinetic energy from composition leads to the one fundamental constant $\\gamma^2=\\beta m^2$. 
Then parameters of deformation $\\beta_i$ of particles or composite bodies\nof different masses $m_i$ are\n$\\beta_i=\\gamma^2\/m_i^2$\nthat is in agreement with relation (\\ref{bg}).\n\\section{Conclusions}\nOne of the main results of the paper is the expression for the parameter of deformation\nfor particles or bodies of different mass (\\ref{bg})\nwhich recovers the equivalence principle and thus the equivalence principle is reconciled with the\ngeneralized uncertainty principle. It is necessary to stress that\nexpression (\\ref{bg}) was derived also in section 3 from the\ncondition of the independence of kinetic energy on composition.\n\nNote that (\\ref{bg}) contains the same constant $\\gamma$ for different particles and parameter of deformation\nis inverse to the squared mass.\nThe constant $\\gamma$ has dimension inverse to velocity. Therefore, it is convenient to introduce\na dimensionless constant $\\gamma c$, where $c$ is the speed of light.\nIn order to make some speculations concerning the possible value of $\\gamma c$\nwe suppose that for the electron the parameter of deformation $\\beta_e$ is related to Planck's\nlength, namely\n\\begin{eqnarray}\n\\hbar\\sqrt\\beta_e=l_p=\\sqrt{\\hbar G\/c^3}.\n\\end{eqnarray}\nThen we obtain\n\\begin{eqnarray}\n\\gamma c=c\\sqrt\\beta m_e=\\sqrt{\\alpha{Gm_e^2\\over e^2}}\\simeq 4.2\\times 10^{-23},\n\\end{eqnarray}\nwhere $\\alpha=e^2\/\\hbar c$ is the fine structure constant.\n\nFixing the parameter of deformation for electron we can calculate the\nparameter of deformation for particles or bodies of different mass. It is more instructive to write\nthe minimal length for space where the composite body of mass $m$ lives:\n\\begin{eqnarray}\n\\hbar\\sqrt\\beta={m_e\\over m}\\hbar\\sqrt\\beta_e={m_e\\over m }l_p.\n\\end{eqnarray}\nAs an example let us consider nucleons (proton or\nneutron). 
The parameter of deformation for nucleons $\\beta_{\\rm nuc}$ or minimal length for nucleons\nreads\n$\\hbar\\sqrt\\beta_{\\rm nuc}\\simeq l_p\/1840.$\nSo, the effective minimal length for nucleons is three orders of magnitude smaller than that for electrons.\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\\label{sec:intro}\n\nIn many fluid dynamic scenarios the compressibility of a liquid is negligible. This allows for simplifications such that direct numerical simulations can rely on simpler incompressible models. In the context of droplet impingement incompressibility is only justified for small impact speeds. High impact speeds trigger compressibility effects of the liquid droplet which can determine the flow dynamics significantly. Examples of high-speed droplet impact scenarios can be found in many industrial applications such as liquid-fueled engines, spray cooling or spray cleaning. In \\cite{Haller2002} it has been shown that incompressible models are not adequate to describe high-speed impacts, especially due to the fact that the jetting dynamics are influenced by a developing shock wave in the liquid phase \\cite{Haller2003}. The time after the impact of the droplet until jetting occurs is actually smaller than the time predicted by incompressible models due to the shock wave pattern. In \\cite{Haller2002} a compressible sharp-interface model is used for the simulations. However, sharp-interface models become intricate in the presence of changes in droplet topology and contact line motion. For this reason, we introduce a diffuse-interface model in this contribution, namely a compressible Navier--Stokes--Allen--Cahn phase field model which allows for complex interface morphologies and dynamic contact angles.\n\\section{Phase Field Models}\n\\label{sec:model}\nPhase field models form a special class of diffuse-interface models. 
In contrast to sharp-interface models, the interface has a (small) finite thickness and in the interfacial region the different fluids are allowed to mix. An additional variable, the \\emph{phase field}, is introduced which allows one to distinguish the different phases. This concept has the advantage that only one system of partial differential equations on the entire considered domain needs to be solved, whereas for sharp-interface models bulk systems need to be solved which are coupled across the interface by possibly complex conditions. Based on energy principles, phase field models can be derived in a thermodynamic framework, see \\cite{Anderson1998,Freistuehler2016} for an overview. They\nfulfill the second law of thermodynamics in the sense that the Clausius--Duhem inequality \\cite{Truesdell1952} holds. In the case of isothermal models this is equivalent to an energy inequality. There are several (quasi-)incompressible \\cite{Lowengrub1998,Abels2012}, compressible \\cite{Blesgen1999,Dreyer2014,Witterstein2010} and recently even incompressible--compressible phase field models \\cite{Repossi2017,Ostrowski2019}. In this section we introduce a compressible Navier--Stokes--Allen--Cahn model.\n\n\\subsection{A Compressible Navier--Stokes--Allen--Cahn system}\n\\label{sec:NSAC}\n\nWe consider a viscous fluid at constant temperature. The fluid is assumed to exist in two phases, a liquid phase denoted by subscript $\\mathrm{L}$ and a vapor phase denoted by subscript $\\mathrm{V}$. In each phase the fluid is thermodynamically described by the corresponding Helmholtz free energy density $\\varrho f_{\\mathrm{L\/V}}(\\varrho)$. The fluid occupies a domain $\\Omega \\subset \\mathbb{R}^d, \\ d\\in \\mathbb{N}$.\nLet $\\varrho >0 $ be the density of the fluid, $\\vec{v} \\in \\mathbb{R}^d$ the velocity and $\\varphi \\in [0,1]$ the phase field. 
Following \\cite{Dreyer2014} we assume that the dynamics of the fluid is described by the isothermal compressible Navier--Stokes--Allen--Cahn system.\n\\begin{align}\n\\partial_t \\varrho + \\operatorname{div}(\\varrho \\mathbf{v}) &= 0, \\label{eq:NSAC1}\\\\\n\\partial_t(\\varrho \\mathbf{v}) + \\operatorname{div}(\\varrho \\mathbf{v}\\otimes \\mathbf{v}+{p\\mathbf{I}})&= \\operatorname{div}(\\mathbf{S}) - \\gamma \\operatorname{div}(\\nabla \\varphi\\otimes \\nabla \\varphi) \\ \\text{ in } \\Omega \\times (0,T), \\label{eq:NSAC2}\\\\\n\\partial_t(\\varrho \\varphi) + \\operatorname{div}(\\varrho \\varphi \\mathbf{v}) &= -\\eta \\mu. \\label{eq:NSAC3} \n\\end{align}\nThe Helmholtz free energy density $\\varrho f$ is defined as \n\\begin{align}\n\\varrho f(\\varrho,\\varphi,\\nabla\\varphi) &= h(\\varphi)\\varrho f_\\mathrm{L}(\\varrho)+(1-h(\\varphi)) \\varrho f_\\mathrm{V}(\\varrho) + \\frac{1}{\\gamma}W(\\varphi) + \\frac{\\gamma}{2}|\\nabla\\varphi|^2 \\label{eq:rhof} \\\\\n&=: \\varrho \\psi(\\varphi,\\varrho) + \\frac{1}{\\gamma}W(\\varphi) + \\frac{\\gamma}{2} |\\nabla \\varphi|^2.\n\\end{align}\nIt consists of the interpolated free energy densities $\\varrho f_{\\mathrm{L\/V}}$ of the pure liquid and vapor phases with the nonlinear interpolation function \n\\begin{align}\\label{eq:def_h}\nh(\\varphi) = 3 \\varphi^2 - 2 \\varphi^3,\n\\end{align}\nand a mixing energy \\cite{Cahn1958} using the double well potential $W(\\varphi)=\\varphi^2(1-\\varphi)^2$.\n\nThe hydrodynamic pressure $p$ is determined through the Helmholtz free energy $\\varrho f$ by the thermodynamic relation\n\\begin{equation}\\label{eq:pdef}\np=p(\\varrho,\\varphi) = -\\varrho f(\\varrho,\\varphi)+\\varrho \\frac{\\partial (\\varrho f)}{\\partial \\varrho}(\\varrho, \\varphi).\n\\end{equation}\nWe define the generalized chemical potential \n\\begin{equation}\n\\mu = \\frac{1}{\\gamma}W'(\\varphi)+ \\frac{\\partial (\\varrho\\psi)}{\\partial \\varphi}-\\gamma \\Delta 
\\varphi,\n\\end{equation}\nwhich steers the phase field variable into equilibrium.\nAdditionally, we denote by $\\eta>0$ the (artificial) mobility.\n\nThe dissipative viscous part of the stress tensor reads as $\\mathbf{S}=\\mathbf{S}(\\varphi,\\nabla\\mathbf{v})= \\nu(\\varphi) (\\nabla \\mathbf{v}+ \\nabla \\mathbf{v}^\\top - \\operatorname{div}(\\mathbf{v})\\mathbf{I})$ with an interpolation of the viscosities $\\nu_{\\mathrm{L\/V}}$ of the pure phases $\\nu(\\varphi) = h(\\varphi)\\nu_\\mathrm{L} + (1-h(\\varphi))\\nu_\\mathrm{V} > 0$.\n\nThe total energy of the system \\eqref{eq:NSAC1}-\\eqref{eq:NSAC3} at time $t$ is defined as\n\\begin{align}\n\\label{eq:etot}\nE(t) &\\coloneqq E_\\mathrm{free}(t) + E_\\mathrm{kin}(t) \\nonumber\\\\ \n &= \\int_\\Omega \\varrho(\\vec{x},t) f(\\varrho(\\vec{x},t),\\varphi(\\vec{x},t),\\nabla \\varphi(\\vec{x},t)) + \\frac{1}{2} \\varrho(\\vec{x},t) |\\mathbf{v}(\\vec{x},t)|^2 \\operatorname{~ d\\!} \\mathbf{x}.\n\\end{align}\n\n\\begin{remark} \\\n\\begin{enumerate}\n\\item \nThe phase field $\\varphi$ is in general an artificial variable; in this case, however, it can be viewed as a mass fraction $\\varphi = \\frac{m_\\mathrm{V}}{m},$ with the mass $m_\\mathrm{V}$ of the vapor constituent and the total mass $m$ of the fluid.\n\\item \nThe special form of the nonlinear interpolation function $h$ with $h'(0) = h'(1) = 0$ guarantees that $\\eqref{eq:NSAC1}-\\eqref{eq:NSAC3}$ allows for physically meaningful equilibria. This can easily be seen by considering a static single-phase equilibrium $\\vec{v} = \\boldsymbol{0}, \\varphi \\equiv 0$. 
If instead $h'(0)\\neq 0$, then the right hand side of the phase field equation \\eqref{eq:NSAC3} would not vanish.\n\\end{enumerate}\n\\end{remark}\nAssuming an impermeable wall, the velocity must satisfy the boundary condition\n\\begin{equation}\n\\mathbf{v}\\cdot \\mathbf{n} = 0 \\quad \\text{ on $\\partial \\Omega$}. \\label{eq:bc1}\n\\end{equation}\nAdditionally, the system is endowed with initial conditions\n\\begin{align}\n\\label{eq:IC}\n\\varrho = \\varrho_0, \\quad \\vec{v} = \\vec{v}_0, \\quad \\varphi = \\varphi_0 \\quad \\text{ on } \\Omega \\times \\{0\\},\n\\end{align}\nusing suitable functions $(\\varrho_0,\\vec{v}_0, \\varphi_0)\\colon \\Omega \\to \\mathbb{R}^+ \\times \\mathbb{R}^d \\times [0,1]$.\n\nHowever, the condition \\eqref{eq:bc1} alone does not suffice to close the system. In the following section we derive a complete set of boundary conditions that allow for moving contact lines (MCL).\n\n\n\\subsection{Boundary Conditions}\n\\label{sec:bc}\nThe system \\eqref{eq:NSAC1}-\\eqref{eq:NSAC3} needs to be complemented with initial and boundary conditions. We are interested in MCL problems. From a sharp interface point of view, the contact line is the intersection of the liquid-vapor interface with the solid wall. The requirement of a contact line moving along the wall renders the derivation of boundary conditions nontrivial. Figure \\ref{fig:MCL} depicts a sketch of a compressible droplet impact scenario with the rebound shock wave dynamics and a moving contact line.\n\\begin{figure}[h]\n\\centering\\includegraphics[width=.649\\textwidth]{MCL.pdf}\n\\caption{Sketch of a compressible droplet impingement on a flat wall with moving contact line.}\n\\label{fig:MCL}\n\\end{figure}\nWe derive appropriate boundary conditions to handle MCL problems with the phase field system \\eqref{eq:NSAC1}-\\eqref{eq:NSAC3} in this section. \n\n\n\nFor the incompressible case, so-called \\emph{general Navier boundary conditions} (GNBC) have been derived \\cite{Qian2003,QIAN2006}. 
Motivated by these works we extend the GNBC to the compressible case.\n\n\nSince phase field modelling naturally lends itself to energy principles, we add a wall free energy term $\\int_{\\partial \\Omega} g(\\varphi) \\operatorname{~ d\\!} s$ to the total energy $E$ from \\eqref{eq:etot} and obtain \n\\begin{align}\\label{eq:total_energy}\nE_{\\mathrm{tot}}(t) &= E(t) + E_{\\mathrm{wall}}(t) \\nonumber\\\\\n&= \\int_\\Omega \\varrho(t) f(\\varrho(t),\\varphi(t),\\nabla \\varphi(t)) + \\frac{1}{2} \\varrho(t) |\\mathbf{v}(t)|^2 \\operatorname{~ d\\!} \\mathbf{x} + \\int_{\\partial \\Omega} g(\\varphi(t)) \\operatorname{~ d\\!} s.\n\\end{align}\n\nHere $g(\\varphi)$ is the interfacial free energy per unit area at the fluid-solid boundary, depending only on the local composition \\cite{QIAN2006}. The specific choice for $g$ is motivated by Young's equation.\nFrom a sharp interface point of view we have\n\\begin{align}\\label{eq:young}\n\\sigma \\cos(\\theta_s)= \\sigma_\\mathrm{S}-\\sigma_\\mathrm{LS},\n\\end{align}\nwith the surface free energy $\\sigma$ of the liquid, the static contact angle $\\theta_\\mathrm{s}$, the surface free energy $\\sigma_\\mathrm{S}$ of the solid, and the interfacial free energy $\\sigma_{\\mathrm{LS}}$ between liquid and solid, see Figure \\ref{fig:young}. We prescribe the difference in energy for $g$, i.e.\n\\begin{align}\n\\sigma_\\mathrm{S}-\\sigma_\\mathrm{LS}=g(0)-g(1).\n\\end{align}\n\n\\begin{figure}[h]\n\\centering\\includegraphics[width=.6\\textwidth]{young_equation.pdf}\n\\caption{Illustration of Young's equation $\\sigma \\cos(\\theta_s)= \\sigma_\\mathrm{S}-\\sigma_\\mathrm{LS}.$}\n\\label{fig:young}\n\\end{figure}\n \nThen, we choose a smooth interpolation between the values $\\pm\\frac{\\Delta g}{2} = \\pm \\frac{g(1)-g(0)}{2}$. However, it was shown in \\cite{Qian2003} that the specific kind of interpolation has no significant impact. 
Hence, for reasons of consistency we use $h$ as interpolation function.\nWith \\eqref{eq:young} we obtain \n\\begin{equation}\ng(\\varphi) \\mathrel{\\vcenter{\\baselineskip0.5ex \\lineskiplimit0pt\n \\hbox{\\scriptsize.}\\hbox{\\scriptsize.}}}%\n = -\\sigma\n\\cos(\\theta_s) \\left(h(\\varphi)-\\frac{1}{2}\\right).\n\\end{equation} \nA variation $\\delta \\varphi$ of $\\varphi$ leads to a variation $\\delta E_\\mathrm{tot}$ of the total energy \\eqref{eq:total_energy}, that is\n\\begin{align*}\n\\delta E_\\mathrm{tot} = \\int_\\Omega \\mu \\delta \\varphi \\operatorname{~ d\\!} \\vec{x} - \\int_{\\partial \\Omega} L(\\varphi)\\frac{\\partial\\varphi}{\\partial_{\\boldsymbol{\\tau}}} \\delta\\varphi_{\\boldsymbol{\\tau}}.\n\\end{align*}\nHere,\n\\[L(\\varphi) \\coloneqq \\gamma \\frac{\\partial \\varphi}{\\partial \\mathbf{n}}+g'(\\varphi)\\] \ncan be interpreted as uncompensated Young stress \\cite{Qian2003}. The boundary tangential vector is denoted by $\\boldsymbol{\\tau}$ and $\\vec{n}$ denotes the outer normal. Thus, $L(\\varphi)=0$ is the Euler--Lagrange equation at the fluid-solid boundary for minimizing the total energy \\eqref{eq:total_energy} with respect to the phase field variable. We assume a boundary relaxation dynamics for $\\varphi$ given by\n\\begin{equation}\n\\partial_t \\varphi + \\mathbf{v}\\cdot \\nabla_{\\boldsymbol{\\tau}} \\varphi = -\\frac{\\alpha}{\\varrho} L(\\varphi),\n\\end{equation}\nwith a relaxation parameter $\\alpha>0$. Here $\\nabla_{\\boldsymbol{\\tau}} \\mathrel{\\vcenter{\\baselineskip0.5ex \\lineskiplimit0pt\n \\hbox{\\scriptsize.}\\hbox{\\scriptsize.}}}%\n = \\nabla-(\\mathbf{n}\\cdot\\nabla)\\mathbf{n}$ is the gradient along the tangential direction. 
\n Since $\\mathbf{v} \\cdot \\mathbf{n} = 0$, we have $\\mathbf{v}\\cdot \\nabla_{\\boldsymbol{\\tau}}\\varphi = v_{\\boldsymbol{\\tau}}\\frac{\\partial \\varphi}{\\partial \\boldsymbol{\\tau}}$,\nand finally we obtain\n\\begin{align}\n\\partial_t \\varphi + v_{\\boldsymbol{\\tau}}\\frac{\\partial \\varphi}{\\partial \\boldsymbol{\\tau}} &= -\\frac{\\alpha}{\\varrho} L(\\varphi) \\quad \\text{ on $\\partial \\Omega$}.\\label{eq:bc3}\n\\end{align}\n\nIn order to complete the derivation of the GNBC we incorporate a slip velocity boundary condition. In single phase models the slip velocity is often taken proportional to the tangential viscous stress. However, in our case we also have to take the uncompensated Young stress into account. In \\cite{Qian2003} it is shown from molecular dynamic simulations that the slip velocity should be taken proportional to the sum of the tangential viscous stress and the uncompensated Young stress.\nHence, with the slip length $\\beta > 0$ we prescribe the boundary condition\n\\begin{equation}\n\\beta v_{\\boldsymbol{\\tau}} + \\nu(\\varphi) \\frac{\\partial v_{\\boldsymbol{\\tau}}}{\\partial \\mathbf{n}} - L(\\varphi)\n\\frac{\\partial \\varphi}{\\partial \\boldsymbol{\\tau}} =0 \\quad \\text{ on $\\partial \\Omega$}. 
\\label{eq:bc2}\n\\end{equation}\n\nAway from the interface the last term in \\eqref{eq:bc2} drops out and we recover the classical Navier-slip condition; in the interface region, however, the additional term acts and allows for correct contact line movement.\n\nIn summary, we obtain the following GNBC for the MCL problem:\n\\begin{align}\n\\mathbf{v}\\cdot \\mathbf{n} &= 0, \\label{eq:gnbc1} \\\\\n\\beta v_{\\boldsymbol{\\tau}} + \\nu(\\varphi) \\frac{\\partial v_{\\boldsymbol{\\tau}}}{\\partial \\mathbf{n}} - L(\\varphi)\n\\frac{\\partial \\varphi}{\\partial \\boldsymbol{\\tau}} &=0, \\hspace*{6ex} \\text{ on $\\partial \\Omega$.} \\label{eq:gnbc2} \\\\\n\\partial_t \\varphi + v_{\\boldsymbol{\\tau}}\\frac{\\partial \\varphi}{\\partial \\boldsymbol{\\tau}} &= -\\frac{\\alpha}{\\varrho} L(\\varphi)\\label{eq:gnbc3}\n\\end{align}\n\nThe GNBC \\eqref{eq:gnbc1}, \\eqref{eq:gnbc2}, \\eqref{eq:gnbc3} contain several important subcases. For $\\alpha \\to \\infty$ we obtain the static contact angle boundary condition, and with $\\beta \\to \\infty$ we end up with no-slip boundary conditions.\n\n\n\\subsection{Energy Inequality}\n\\label{sec:energy_ineq}\nFor isothermal models, thermodynamic consistency means verifying that solutions of the problem at hand admit an energy inequality. Precisely, we have for the system \\eqref{eq:NSAC1}-\\eqref{eq:NSAC3} the following result.\n\\begin{thm}[Energy inequality]\\label{thm:energy_ineq}\nLet $(\\varrho,\\mathbf{v},\\varphi)$ with values in $(0,\\infty)\\times \\mathbb{R}^d \\times [0,1]$ be a classical solution of \\eqref{eq:NSAC1}-\\eqref{eq:NSAC3} in $(0,T) \\times \\Omega$ satisfying the boundary conditions \\eqref{eq:gnbc1} - \\eqref{eq:gnbc3} on $(0,T) \\times \\partial \\Omega$. 
Then for all $t \\in (0,T)$ the following energy inequality holds:\n\\begin{flalign} \\label{eq:energy_ineq}\n&\\frac{\\operatorname{d}}{\\operatorname{~ d\\!} t} E_\\mathrm{tot}(t) = \\frac{\\operatorname{d}}{\\operatorname{~ d\\!} t} (E_\\mathrm{free}(t) + E_\\mathrm{kin}(t) +E_\\mathrm{wall}(t)) & \\nonumber\\\\\n=&\\frac{\\operatorname{d}}{\\operatorname{~ d\\!} t} \\left(\\int_\\Omega \\varrho f(\\varrho,\\varphi,\\nabla \\varphi) + \\frac{1}{2} \\varrho |\\mathbf{v}|^2 \\operatorname{~ d\\!} \\mathbf{x} + \\int_{\\partial \\Omega} g(\\varphi) \\operatorname{~ d\\!} s\\right) & \\nonumber \\\\ \n=&-\\int_\\Omega \\frac{\\eta}{\\varrho} \\mu^2 \\operatorname{~ d\\!} \\mathbf{x} - \\int_\\Omega \\mathbf{S} \\colon \\nabla \\mathbf{v} \\operatorname{~ d\\!} \\mathbf{x} \\nonumber\\\\\n&- \\int_{\\partial \\Omega} \\beta |v_{\\boldsymbol{\\tau}}|^2 \\operatorname{~ d\\!} s - \\int_{\\partial \\Omega} \\frac{\\alpha}{\\varrho} |L(\\varphi)|^2 \\operatorname{~ d\\!} s \\leq 0.\n\\end{flalign}\n\\end{thm}\nAs expected, the energy inequality identifies phase transition, viscosity, wall slip, and composition relaxation at the solid boundary as the mechanisms of energy dissipation.\n\\begin{proof}\nIn a straightforward way we compute:\n\n\\begin{align*}\n\\frac{\\operatorname{~ d\\!}}{\\operatorname{~ d\\!}t} E_\\mathrm{tot}(t) =& \\frac{\\operatorname{~ d\\!}}{\\operatorname{~ d\\!}t} \\left(\\int_\\Omega \\varrho f(\\varrho,\\varphi,\\nabla\\varphi) + \\frac{1}{2}\\varrho |\\vec{v}|^2 \\operatorname{~ d\\!}\\vec{x} + \\int_{\\partial \\Omega} g(\\varphi) \\operatorname{~ d\\!} s \\right) \\\\\n=& \\frac{\\operatorname{~ d\\!}}{\\operatorname{~ d\\!}t} \\left(\\int_\\Omega\\frac{1}{\\gamma} W(\\varphi) + \\varrho\\psi(\\varrho,\\varphi) + \\frac{\\gamma}{2} |\\nabla\\varphi|^2 + \\frac{1}{2}\\varrho |\\vec{v}|^2 \\operatorname{~ d\\!} \\vec{x}+ \\int_{\\partial \\Omega} g(\\varphi) \\operatorname{~ d\\!} s \\right)\\\\\n=& \\int_\\Omega 
\\varphi_t \\left(\\frac{1}{\\gamma}W'(\\varphi)+\\frac{\\partial (\\varrho \\psi)}{\\partial \\varphi}-\\gamma\\Delta \\varphi\\right) + \\varrho_t \\left(\\frac{\\partial (\\varrho \\psi)}{\\partial \\varrho} - \\frac{1}{2} |\\vec{v}|^2\\right) + (\\varrho\\vec{v})_t \\cdot \\vec{v} \\operatorname{~ d\\!}\\vec{x} \\\\\n&+ \\int_{\\partial \\Omega} \\varphi_t(g'(\\varphi) + \\gamma \\nabla \\varphi \\cdot \\vec{n}) \\operatorname{~ d\\!} s. \\\\\n\\intertext{Now we use \\eqref{eq:NSAC1}-\\eqref{eq:NSAC3} to replace the time derivatives in the volume integrals. Using \\eqref{eq:pdef} we obtain after basic algebraic manipulations}\n\\frac{\\operatorname{~ d\\!}}{\\operatorname{~ d\\!}t} E_\\mathrm{tot}(t) =& - \\int_\\Omega \\operatorname{div}(\\varrho\\vec{v})\\left(\\frac{\\partial (\\varrho\\psi)}{\\partial\\varrho}-\\frac{1}{2}|\\vec{v}|^2\\right) + \\operatorname{div}(\\varrho \\vec{v}\\otimes\\vec{v})\\cdot\\vec{v} \\operatorname{~ d\\!} \\vec{x} - \\int_\\Omega \\frac{\\eta}{\\varrho} \\mu^2 \\operatorname{~ d\\!} \\vec{x} \\\\\n&- \\int_\\Omega \\vec{v}\\cdot \\varrho \\nabla\\left(\\frac{\\partial (\\varrho \\psi)}{\\partial \\varrho}\\right) - \\operatorname{div}(\\vec{S})\\cdot\\vec{v} \\operatorname{~ d\\!} \\vec{x}\n+ \\int_{\\partial \\Omega} \\varphi_t L(\\varphi) \\operatorname{~ d\\!} s. \\\\ \\intertext{We integrate by parts and have}\n\\frac{\\operatorname{~ d\\!}}{\\operatorname{~ d\\!}t} E_\\mathrm{tot}(t) =& - \\int_\\Omega \\frac{\\eta}{\\varrho} \\mu^2 \\operatorname{~ d\\!} \\vec{x} - \\int_\\Omega \\vec{S}\\colon \\nabla \\vec{v} \\operatorname{~ d\\!} \\vec{x}+ \\int_{\\partial \\Omega} \\varphi_t L(\\varphi) \\operatorname{~ d\\!} s \\\\\n&+ \\int_{\\partial \\Omega} \\vec{S}\\vec{v}\\cdot\\vec{n} - \\varrho\\vec{v}\\left(\\frac{\\partial (\\varrho \\psi)}{\\partial \\varrho}+\\frac{1}{2}|\\vec{v}|^2\\right)\\cdot\\vec{n}\\operatorname{~ d\\!} s. 
\\\\ \\intertext{With the boundary conditions \\eqref{eq:gnbc1}-\\eqref{eq:gnbc3} we finally obtain}\n\\frac{\\operatorname{~ d\\!}}{\\operatorname{~ d\\!}t} E_\\mathrm{tot}(t) =& - \\int_\\Omega \\frac{\\eta}{\\varrho} \\mu^2 \\operatorname{~ d\\!} \\vec{x} - \\int_\\Omega \\vec{S}\\colon \\nabla \\vec{v} \\operatorname{~ d\\!} \\vec{x} - \\int_{\\partial \\Omega} \\beta |v_{\\boldsymbol{\\tau}}|^2 \\operatorname{~ d\\!} s - \\int_{\\partial \\Omega} \\frac{\\alpha}{\\varrho} |L(\\varphi)|^2 \\operatorname{~ d\\!} s.\n\\end{align*}\nThis concludes the proof.\n\\end{proof}\n\n\n\\subsection{Surface Tension}\n\\label{sec:surface_tens}\nThere are different interpretations of surface tension. It can be either viewed as a force acting in tangential direction of the interface or as excess energy stored in the interface \\cite{Jamet2002}.\nIn line with our energy-based derivation we consider a planar equilibrium profile and integrate the excess free energy density over this profile.\nWe assume that static equilibrium conditions hold, i.e. $\\vec{v} = \\boldsymbol{0}.$\nThe planar profile is assumed to be parallel to the $x$-axis and density, velocity and phase field are independent from $t, y,$ and $z$. \nThen the equilibrium is governed by the solution of the following boundary value problem on the real line. \n\nFind $\\varrho=\\varrho(x), \\varphi=\\varphi(x)$ such that\n\\begin{align}\n\\left(-\\varrho\\psi-\\frac{1}{\\gamma}W(\\varphi) - \\frac{\\gamma}{2}\\varphi_x^2 +\\varrho \\frac{\\partial (\\varrho \\psi)}{\\partial\\varrho}\\right)_x &= -{\\gamma(\\varphi_x^2)}_x, \\label{eq:eq1} \\\\\n\\frac{1}{\\gamma} W'(\\varphi) + \\frac{\\partial (\\varrho \\psi)}{\\partial \\varphi} - \\gamma \\varphi_{xx} &= 0,\\label{eq:eq2}\n\\end{align}\nand\n\\begin{equation}\n\\varrho(\\pm\\infty) = \\varrho_\\mathrm{V\/L}, \\quad \\varphi(-\\infty) = 0 , \\quad \\varphi(\\infty) = 1, \\quad \\varphi_x(\\pm\\infty)=0. 
\\label{eq:eqbc}\n\\end{equation}\n\nMultiplying \\eqref{eq:eq2} by $\\varphi_x$ and subtracting from \\eqref{eq:eq1} yields\n\\begin{equation}\n\\frac{\\partial (\\varrho \\psi)}{\\partial \\varrho} = \\mathrm{const}. \n\\label{eq:muconst}\n\\end{equation}\nMultiplying \\eqref{eq:eq2} by $\\varphi_x$ and integrating from $-\\infty$ to some $x\\in \\mathbb{R}$ using \\eqref{eq:eq1} and \\eqref{eq:eqbc} leads to\n\n\\begin{equation}\n\\frac{1}{\\gamma} W(\\varphi(x)) + \\varrho(x)\\psi(\\varrho(x),\\varphi(x)) - \\varrho_\\mathrm{V}\\psi(\\varrho_\\mathrm{V},0) = \\frac{\\gamma}{2}\\varphi_x^2(x).\n\\label{eq:surfacetenshelp}\n\\end{equation}\nFrom \\eqref{eq:surfacetenshelp} we obtain for $x\\to \\infty$\n\\begin{equation}\n\\varrho_\\mathrm{L}\\psi(\\varrho_\\mathrm{L},1) = \\varrho_\\mathrm{V}\\psi(\\varrho_\\mathrm{V},0) \\eqqcolon \\overline{\\varrho\\psi}.\n\\label{eq:surfacetenshelp1}\n\\end{equation}\n\nAs mentioned before, surface tension can be defined by means of excess free energy. Roughly speaking, an excess quantity is the difference of the quantity in the considered system and in a (sharp interface) reference system where the bulk values are maintained up to a dividing interface.\nThe interface position $x_0$ is determined by vanishing excess density.\n\n\nIn summary, we define the surface tension $\\sigma$ via the relationship\n\\begin{align}\n\\sigma =& \\int_{-\\infty}^{x_0} \\varrho f(\\varrho,\\varphi,\\varphi_x) - \\varrho_\\mathrm{V}\\psi(\\varrho_\\mathrm{V},0) \\operatorname{~ d\\!} x \\nonumber \\\\\n&+\\int_{x_0}^\\infty \\varrho f(\\varrho,\\varphi,\\varphi_x) - \\varrho_\\mathrm{L}\\psi(\\varrho_\\mathrm{L},1) \\operatorname{~ d\\!} x, \\\\\n\\intertext{where $(\\varrho,\\varphi)$ is a solution of \\eqref{eq:eq1}-\\eqref{eq:eqbc}. 
Using \\eqref{eq:surfacetenshelp} we have}\n\\sigma =& \\int_{-\\infty}^{x_0} \\gamma \\varphi_x^2 \\operatorname{~ d\\!} x + \\int_{x_0}^\\infty \\gamma \\varphi_x^2 + (\\varrho_\\mathrm{V}\\psi(\\varrho_\\mathrm{V},0)-\\varrho_\\mathrm{L}\\psi(\\varrho_\\mathrm{L},1)) \\operatorname{~ d\\!} x. \\\\ \n\\intertext{With \\eqref{eq:surfacetenshelp1} it follows}\n\\sigma =& \\int_{-\\infty}^{\\infty} \\gamma \\varphi_x^2 \\operatorname{~ d\\!} x = \\sqrt{2} \\int_{\\varphi_\\mathrm{V}}^{\\varphi_\\mathrm{L}} \\sqrt{W(\\varphi)+\\gamma(\\varrho\\psi(\\varrho(\\varphi),\\varphi)- \\overline{\\varrho\\psi})} \\operatorname{~ d\\!} \\varphi.\n\\end{align}\n\n\n\nIn the last step we changed the integration variable from $x$ to $\\varphi$. This is possible since $\\varrho$ can be expressed as a function of $\\varphi$: assuming convex free energies $\\varrho f_\\mathrm{L\/V}$ in \\eqref{eq:rhof}, $\\varrho\\psi$ is convex in $\\varrho$, and \\eqref{eq:muconst} together with the implicit function theorem yields $\\varrho = \\varrho(\\varphi)$.\n\nOne can see that the surface tension is mainly dictated by the double well potential $W(\\varphi)$. There is a contribution due to the equations of state of the different phases; however, in the sharp interface limit $\\gamma \\to 0$ this contribution vanishes. This differs from (quasi-)incompressible models like \\cite{Lowengrub1998}, where there is no contribution from the equations of state and the surface tension is purely determined by the double well function. Of course, surface tension is a material parameter given by physics, depending on the fluids and walls considered. Therefore, in simulations the double well should be scaled accordingly to yield the correct surface tension.\n\n\n\\section{Numerical Experiments}\n\\label{sec:num_exp}\nThe phase field system \\eqref{eq:NSAC1}-\\eqref{eq:NSAC3} is of mixed hyperbolic-parabolic type. This complicates the derivation of discretization methods. 
An appropriate choice is a discretization based on discontinuous Galerkin (DG) methods. In fact, even versions which reproduce the energy dissipation precisely are available \\cite{Giesselmann2015a,Repossi2017,Kraenkel2018}. The key idea behind those schemes is to achieve stabilization through the exact approximation of the energy, that is, the energy inequality \\eqref{eq:energy_ineq} should be fulfilled exactly on the discrete level without introducing numerical dissipation. This helps to prevent an increase of energy and possibly associated spurious currents. Additionally, the schemes are designed such that they preserve the total mass. Motivated by \\cite{Giesselmann2015a,Kraenkel2018} we derived such a scheme for the system \\eqref{eq:NSAC1}-\\eqref{eq:NSAC3};\nfor details we refer to \\cite{Ostrowski2018}.\nIn the following we present three numerical simulations using this scheme.\n\n\n\n\\subsection{Choice of Parameters}\nFor the equations of state in the bulk phases, we choose stiffened gas equations\n\\begin{align*}\n\\varrho f_\\mathrm{L\/V}(\\varrho) = \\alpha_\\mathrm{L\/V} \\varrho \\ln(\\varrho) + (\\beta_\\mathrm{L\/V}-\\alpha_\\mathrm{L\/V})\\varrho + \\gamma_\\mathrm{L\/V},\n\\end{align*}\nwith parameters $\\alpha_\\mathrm{L\/V} > 0, \\beta_\\mathrm{L\/V} \\in \\mathbb{R}, \\gamma_\\mathrm{L\/V} \\in \\mathbb{R}$.\nIn order to avoid preferring one of the phases, we choose the minima of the two free energies to be at the same height.\n\nDue to surface tension, the density inside a liquid droplet is slightly higher than the value which minimizes $\\varrho f_\\mathrm{L}$. The density of the surrounding vapor is slightly lower than the minimizer of $\\varrho f_\\mathrm{V}$. We choose the initial density profile accordingly. For the bulk viscosities we set $\\nu_\\text{L} = 0.0125$ and $\\nu_\\text{V} = 0.00125$. If not stated otherwise, the capillary parameter is taken as $\\gamma=5\\cdot 10^{-4}$ and the mobility as $\\eta=10$. 
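For this family of free energies, the thermodynamic relation \eqref{eq:pdef} can be evaluated in closed form: the pressure reduces to $p = \alpha_{\mathrm{L}/\mathrm{V}}\varrho - \gamma_{\mathrm{L}/\mathrm{V}}$, an isothermal stiffened-gas law. The following sketch checks this identity numerically via a finite-difference derivative; the phase parameters are illustrative values, not a prescribed configuration.

```python
# Sanity check of the stiffened gas law: for
#   rho*f(rho) = a*rho*ln(rho) + (b - a)*rho + g
# the relation p = rho * d(rho*f)/drho - rho*f reduces to p = a*rho - g.
# The parameter values (a, b, g) below are illustrative assumptions.
import math

a, b, g = 5.0, -4.0, 11.0  # alpha, beta, gamma of one phase

def rho_f(rho):
    return a * rho * math.log(rho) + (b - a) * rho + g

def pressure(rho, h=1e-6):
    # central finite difference for d(rho*f)/drho
    d = (rho_f(rho + h) - rho_f(rho - h)) / (2 * h)
    return rho * d - rho_f(rho)

for rho in (0.3, 1.0, 2.23):
    assert abs(pressure(rho) - (a * rho - g)) < 1e-6
print("p(rho) = a*rho - g confirmed")
```

The same closed form explains why matching the minima of $\varrho f_\mathrm{L}$ and $\varrho f_\mathrm{V}$ fixes the relative values of the constants $\gamma_{\mathrm{L}/\mathrm{V}}$.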
The polynomial degree of the DG ansatz functions is $2$.\n\n\n\\subsection{Merging Droplets}\\label{sec:ex1}\n\nIn order to illustrate that phase field models are able to handle topological changes, we consider the example of two merging droplets. Initially there is no velocity field, $\\vec{v}_0 = \\boldsymbol{0}$, and we consider two kissing droplets. The computational domain is $[0,1]\\times[0,1]$. The droplets are located at $(0.39,0.5)$ and $(0.6,0.5)$ with radii $0.08$ and $0.12$. The parameters for the equations of state are $\\alpha_\\mathrm{L} = 5 ,\\beta_\\mathrm{L}= -4, \\gamma_\\mathrm{L}= 11,\\alpha_\\mathrm{V} = 1.5 ,\\beta_\\mathrm{V}= 1.8, \\gamma_\\mathrm{V}=0.324$. The initial density profile is smeared out with value $\\varrho_\\mathrm{L}=2.23$ inside and $\\varrho_\\mathrm{V} = 0.3$ outside the droplets. As expected, the droplets merge into one larger droplet. This evolution with $\\eta = 10$ is depicted in Figure \\ref{fig:merging}.\n\n\\begin{figure}[h]\n\\centering\\includegraphics[width=.9\\textwidth]{merging_190924.png}\n\\caption{Merging droplets. Density $\\varrho$ at times $t=0$, $t=0.2$, and $t=2$ for $\\eta = 10.$}\n\\label{fig:merging}\n\\end{figure}\n\nWe can observe that the model handles topological changes easily. However, the dynamics of the phase field relaxation are determined by the mobility $\\eta$, which needs to be chosen according to the problem. This is illustrated in Figure \\ref{fig:mergingenergy}, where the total energy over time is plotted for different values of the mobility $\\eta$.\n\\begin{SCfigure}\n\\centering\\includegraphics[height=.4\\textwidth]{merging_energy.png}\n\\caption{Total energy $E_\\mathrm{tot}$ over time for the droplet merging simulation with different values for the mobility $\\eta$.}\n\\label{fig:mergingenergy}\n\\end{SCfigure}\n\nThe numerical scheme is designed to mimic the energy inequality \\eqref{eq:energy_ineq} on the discrete level. 
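The role of the mobility in the energy dissipation can be mimicked by a drastically simplified toy problem: for a spatially homogeneous state with $\vec{v}=\boldsymbol{0}$ and constant $\varrho$, and ignoring the EOS coupling (an assumption made here for illustration only), \eqref{eq:NSAC3} reduces to the relaxation ODE $\varphi' = -(\eta/\varrho)\,W'(\varphi)/\gamma$, whose energy $W(\varphi)/\gamma$ decays monotonically, and faster for larger $\eta$. A minimal explicit-Euler sketch with illustrative parameter values:

```python
# Toy relaxation phi' = -(eta/rho) * W'(phi)/gam with W(phi) = phi^2*(1-phi)^2.
# Shows that the energy W(phi)/gam decays, and decays faster for larger eta.
# All values are illustrative; transport and EOS coupling are deliberately dropped.

def W(phi):  return phi**2 * (1 - phi)**2
def dW(phi): return 2*phi*(1 - phi)**2 - 2*phi**2*(1 - phi)

def relax(eta, rho=1.0, gam=5e-4, phi0=0.4, dt=1e-7, steps=2000):
    phi, energies = phi0, []
    for _ in range(steps):
        phi -= dt * (eta / rho) * dW(phi) / gam  # explicit Euler step
        energies.append(W(phi) / gam)
    return energies

e_slow = relax(eta=1.0)
e_fast = relax(eta=10.0)
assert all(a >= b for a, b in zip(e_slow, e_slow[1:]))  # monotone energy decay
assert e_fast[-1] < e_slow[-1]                          # larger eta: faster dissipation
```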
The discrete energy decreases and, as expected from \\eqref{eq:energy_ineq}, the higher the value of $\\eta$, the higher the energy dissipation. \n\n\n\\subsection{Contact Angle}\n\\label{sec:ex2}\nIn this example we address droplet-wall interactions. We consider the case of a static contact angle. This means we let the relaxation parameter $\\alpha$ in \\eqref{eq:bc3} tend to infinity.\nIn the limit we obtain static contact angle boundary conditions.\n\nWe set the static contact angle $\\theta_s = 0.1\\pi \\approx 18^\\circ$. The computational domain, density values and EOS parameters are as in Section \\ref{sec:ex1}. As initial condition we use a droplet sitting on a flat surface with a contact angle of $90^\\circ$. The droplet position is $(0.5,0)$ with radius $0.2$. Since the initial condition is far away from equilibrium, we observe dynamics on the wall boundary towards the equilibrium configuration. Thus, we can observe wetting dynamics, see Figure \\ref{fig:wetting}. \n\n\\begin{figure}[h]\n\\centering\\includegraphics[width=.9\\textwidth]{contact_angle190924.png}\n\\caption{Wetting of a smooth wall with (GNBC) boundary conditions for the static limit $\\alpha\\to\\infty$ and contact angle $\\theta_s=0.1\\pi$. Density $\\varrho$ at $t=0$ and $t=0.9$.}\n\\label{fig:wetting}\n\\end{figure}\n\nThe wall contribution leads to a large force on the boundary, which renders the system stiff. Although we have an implicit scheme, we increased the interface width to be able to handle the boundary terms. Hence, we chose $\\gamma = 10^{-2}$ in this simulation.\n\n\n\\subsection{Droplet Impingement}\n\\label{sec:ex3}\nWith this example we consider droplet impingement. The computational domain is the same as in Section \\ref{sec:ex1}. 
As initial condition we use a droplet at $(0.5,0.2)$ with radius $0.1$.\nThe parameters for the equations of state are $\\alpha_\\mathrm{L} = 5 ,\\beta_\\mathrm{L}= -0.8, \\gamma_\\mathrm{L}= 5.5,\\alpha_\\mathrm{V} = 1.5 ,\\beta_\\mathrm{V}= 1.8, \\gamma_\\mathrm{V}=0.084$. The initial density profile is smeared out with value $\\varrho_\\mathrm{L}=1.2$ inside and $\\varrho_\\mathrm{V} = 0.3$ outside the droplet.\nIn contrast to sharp interface models based on the Navier--Stokes equations, phase field models can still have contact line movement even if no-slip conditions are used. This is due to the fact that the contact line is regularized and the dynamics are driven by evolution in the phase field variable rather than by advective transport. This can be seen in Figure \\ref{fig:impact}, where a droplet impact with no-slip conditions is simulated. This is a special case of the GNBC, with $\\alpha \\to \\infty$ and $\\beta\\to \\infty$. \n\n\\begin{figure}[h!]\n\\centering\\includegraphics[width=.86\\textwidth]{impact_mu_comparison_crop}\n\\caption{\nDroplet impact simulation. Density $\\varrho$ and chemical potential $\\mu$ at times $t=0.005, t=0.13,t=0.21$.\n}\n\\label{fig:impact}\n\\end{figure}\n\nIt can be seen that the generalized chemical potential $\\mu$ is low at the contact line, which leads to fast dynamics in the phase field and hence to a moving contact line. Additionally, we can see the (smeared out) shock waves in the vapor phase and also in the liquid phase, where the shocks move faster due to the higher speed of sound in the liquid.\n\n\n\\section{Summary and Conclusions}\nIn this work we presented a phase field approach to model and simulate compressible droplet impingement scenarios. To be precise, we introduced a compressible Navier--Stokes--Allen--Cahn model in Section \\ref{sec:NSAC}. We discussed modelling aspects, with emphasis on the energy-based derivation. We highlighted the connection of thermodynamic consistency with an energy inequality. 
Further, we proved in Theorem \\ref{thm:energy_ineq} that solutions to the system fulfill this inequality. Surface tension can be interpreted as excess free energy. We quantified the amount of surface tension present in the model in Section \\ref{sec:surface_tens}. Moving contact line problems need special attention with respect to boundary conditions. Hence, physically relevant boundary conditions were derived as Generalized Navier Boundary Conditions in Section \\ref{sec:bc}. In Section \\ref{sec:num_exp} numerical examples were given. In future work we will implement the general, dynamic version of the GNBC to capture jetting phenomena in the impact case.\n\n\n\n\n\\section*{Acknowledgments}\nThe authors kindly acknowledge the financial support of this work by the Deutsche Forschungsgemeinschaft (DFG)\nin the frame of the International Research Training Group \"Droplet Interaction Technologies\" (DROPIT).\n\n\n\\begin{small}\n\\bibliographystyle{abbrv}\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} diff --git a/data_all_eng_slimpj/shuffled/split2/finalzzjcuh b/data_all_eng_slimpj/shuffled/split2/finalzzjcuh new file mode 100644 index 0000000000000000000000000000000000000000..2fbcffe5b0a263d493db1adc64c85ebf1a1ec9d0 --- /dev/null +++ b/data_all_eng_slimpj/shuffled/split2/finalzzjcuh @@ -0,0 +1,5 @@ +{"text":"\\section{#1} \\setcounter{equation}{0}}\n\\newcommand{\\varepsilon}{\\varepsilon}\n\\begin{document}\n\n\\title[Convergence for a degenerate parabolic equation with gradient source]{Convergence to separate variables solutions for a degenerate parabolic equation with gradient source}\n\n\\author{Philippe Lauren\\c cot}\n\\address{Institut de Math\\'ematiques de Toulouse, CNRS UMR~5219, Universit\\'e de Toulouse, F--31062 Toulouse Cedex 9, France}\n\\email{laurenco@math.univ-toulouse.fr}\n\\author{Christian Stinner}\n\\address{Fakult\\\"at f\\\"ur Mathematik, Universit\\\"at Duisburg-Essen, D--45117 Essen, 
Germany}\n\\email{christian.stinner@uni-due.de}\n\n\\date{\\today}\n\n\\begin{abstract}\nThe large time behaviour of nonnegative solutions to a quasilinear degenerate diffusion equation with a source term depending solely on the gradient is investigated. After a suitable rescaling of time, convergence to a unique profile is shown for global solutions. The proof relies on the half-relaxed limits technique within the theory of viscosity solutions and on the construction of suitable supersolutions and barrier functions to obtain optimal temporal decay rates and boundary estimates. Blowup of weak solutions is also studied.\n\\end{abstract}\n\n\\maketitle\n\n\\mysection{Introduction}\\label{sec:int}\n\nQualitative properties of nonnegative solutions to\n\\begin{eqnarray}\n\\partial_t u -\\Delta_p u & = & |\\nabla u|^q\\,, \\qquad (t,x)\\in Q:= (0,\\infty)\\times\\Omega\\,, \\label{a0a}\\\\\nu & = & 0\\,, \\qquad (t,x)\\in (0,\\infty)\\times\\partial\\Omega\\,, \\label{a0b} \\\\\nu(0) & = & u_0\\,, \\quad x\\in\\Omega\\,, \\label{a0c}\n\\end{eqnarray}\nvary greatly according to the relative strength of the (possibly nonlinear and degenerate) diffusion $\\Delta_p u := \\mbox{ div }\\left( |\\nabla u|^{p-2}\\ \\nabla u \\right)$ and the source term $|\\nabla u|^q$ which is measured by the exponents $p\\ge 2$ and $q>0$. More precisely, if $q\\in (0,p-1)$, the comparison principle fails to be valid for the corresponding stationary equation \\cite{BB01} and the existence of non-zero steady states is expected. The latter is known to be true for $p=2$ and $q\\in (0,1)$ for a general bounded domain $\\Omega$ \\cite{BDD92,Lau07} and for $p>2$ and $q\\in (0,p-1)$ if $\\Omega=B(0,1)$ is the open unit ball of $\\mathbb R^N$ \\cite{BLS10,Sti10}. A complete classification of nonnegative steady states seems nevertheless to be lacking in general, except in space dimension $N=1$ \\cite{Lau07,Sti10} and when $\\Omega=B(0,1)$ for radially symmetric solutions \\cite{BLS10}. 
In these two particular cases, there is a one-parameter family $(w_\\vartheta)_{\\vartheta\\in [0,1]}$ of stationary solutions to \\eqref{a0a}-\\eqref{a0b} with the properties $w_0=0$ and $w_\\vartheta< w_{\\vartheta'}$ in $\\Omega$ if $\\vartheta<\\vartheta'$. In addition, each nonnegative solution to \\eqref{a0a}-\\eqref{a0c} converges as $t\\to\\infty$ to one of these steady states \\cite{BLS10,Lau07,Sti10} and the available classification of the steady states plays an important role in the convergence proof. The classification of nonnegative steady states to \\eqref{a0a}-\\eqref{a0b} and the large time behaviour of nonnegative solutions to \\eqref{a0a}-\\eqref{a0c} thus remain unsolved problems when $q\\in (0,p-1)$ and $\\Omega$ is an arbitrary bounded domain of $\\mathbb R^N$, $N\\ge 2$.\n\nThe situation is clearer for $q\\ge p-1$, as the comparison principle \\cite{BB01} guarantees that zero is the only stationary solution to \\eqref{a0a}-\\eqref{a0b}. Convergence to zero of nonnegative solutions to \\eqref{a0a}-\\eqref{a0c} is then expected in that case, but the dynamics turn out to be more complicated as the gradient source term $|\\nabla u|^q$ induces finite time blowup for some solutions. More precisely, when $p=2$, global existence and convergence to zero for large times of solutions to \\eqref{a0a}-\\eqref{a0c} are shown in \\cite{BDHL07,SZ06,Tch10} when either $q\\in [1,2]$ or $q>2$ and $\\|u_0\\|_{C^1}$ is sufficiently small. The smallness condition on $u_0$ for $p=2$ and $q>2$ cannot be removed, as finite time gradient blowup occurs for ``large'' initial data in that case \\cite{So02}. The blowup of the gradient then takes place on the boundary of $\\Omega$ \\cite{SZ06}, and additional information on the blowup rate and the location of the blowup points is provided in \\cite{GH08,LS10}. In addition, the continuation of solutions after the blowup time is studied in \\cite{BDL04} within the theory of viscosity solutions. 
Coming back to the convergence to zero of global solutions, still for $p=2$, the temporal decay rate and the limiting profile are identified in \\cite{BDHL07} when $q\\in (1,2]$ and shown to be those of the linear heat equation.\n\nTo our knowledge, the slow diffusion case $p>2$ has not been studied and the main purpose of this paper is to investigate what happens when $q\\ge p-1$ and $p>2$. Our results may be summarized as follows: let $\\Omega$ be a bounded domain of $\\mathbb R^N$ with smooth boundary $\\partial\\Omega$ (at least $C^2$) and consider an initial condition $u_0$ having the following properties:\n\\begin{equation}\\label{a1}\nu_0\\in C_0(\\bar{\\Omega}) := \\{ z \\in C(\\bar{\\Omega})\\ :\\ z=0 \\;\\;\\mbox{ on }\\;\\; \\partial\\Omega \\}\\,, \\quad u_0\\ge 0\\,, \\quad u_0\\not\\equiv 0\\,.\n\\end{equation}\nThen\n\\begin{description}\n\\item[(a)] if $q=p-1$, there is a unique global (viscosity) solution $u$ to \\eqref{a0a}-\\eqref{a0c} and $t^{1\/(p-2)} u(t)$ converges as $t\\to\\infty$ to a unique profile $f$ which does not depend on $u_0$. In addition, $u_\\infty: (t,x)\\longmapsto t^{-1\/(p-2)}\\ f(x)$ is the unique solution to \\eqref{a0a}-\\eqref{a0b} with an initial condition identically infinite in $\\Omega$, see Theorem~\\ref{tha1} below. 
The availability of solutions having infinite initial value in $\\Omega$ (also called \\textit{friendly giants}) and their stability are well-known for the porous medium equation $\\partial_t z - \\Delta z^m=0$, $m>1$, the $p$-Laplacian equation $\\partial_t z - \\Delta_p z=0$, $p>2$, and some related equations sharing a similar variational structure, see \\cite{MV94,RVPV09,Va07} for instance, but also for the semilinear diffusive Hamilton-Jacobi equation with gradient absorption $\\partial_t z - \\Delta z + |\\nabla z|^q=0$, $q>1$ \\cite{CLS89}.\n\\item[(b)] if $q\\in (p-1,p]$, there is a unique global (viscosity) solution $u$ to \\eqref{a0a}-\\eqref{a0c} and $t^{1\/(p-2)} u(t)$ converges as $t\\to\\infty$ to a unique profile $f_0$ which does not depend on $u_0$. In that case, $(t,x)\\longmapsto t^{-1\/(p-2)}\\ f_0(x)$ is the unique solution to the $p$-Laplacian equation $\\partial_t z - \\Delta_p z=0$ with homogeneous Dirichlet boundary conditions and an initial condition identically infinite in $\\Omega$, see Theorem~\\ref{tha2} below. Therefore, the gradient source term $|\\nabla u|^q$ does not show up in the large time dynamics.\n\\item[(c)] if $q>p$ and $u_0$ is sufficiently small, there is a unique global (viscosity) solution $u$ to \\eqref{a0a}-\\eqref{a0c} and $t^{1\/(p-2)} u(t)$ converges as $t\\to\\infty$ to $f_0$ as in the previous case, see Theorem~\\ref{tha2} below.\n\\item[(d)] if $q>p$ and $u_0$ is sufficiently large, then there is no global Lipschitz continuous weak solution to \\eqref{a0a}-\\eqref{a0c}, see Proposition~\\ref{prz1} below. Let us point out that, since the notion of solution used for this result differs from that used for the previous cases, it only provides an indication that the smallness condition is needed in case~\\textbf{(c)}.\n\\end{description}\n\nBefore stating precisely the main results, we point out that \\eqref{a0a} is a quasilinear degenerate parabolic equation which is unlikely to have classical solutions. 
It turns out that a suitable framework for the well-posedness of the initial-boundary value problem \\eqref{a0a}-\\eqref{a0c} is the theory of viscosity solutions (see, e.g., \\cite{BdCD97,Bl94,CIL92}) and we first define the notion of solutions to be used throughout this paper.\n\n\\begin{definition}\\label{defa0}\nConsider $u_0\\in C_0(\\bar{\\Omega})$ satisfying \\eqref{a1}. A function $u\\in C([0,\\infty)\\times\\bar{\\Omega})$ is a solution to \\eqref{a0a}-\\eqref{a0c} if $u$ is a viscosity solution to \\eqref{a0a} in $Q$ and satisfies\n$$\nu(t,x) = 0\\,, \\quad (t,x)\\in (0,\\infty)\\times\\partial\\Omega \\;\\;\\mbox{ and }\\;\\; u(0,x)=u_0(x)\\,, \\quad x\\in\\bar{\\Omega}\\,.\n$$\n\\end{definition}\n\nWe begin with the case $p>2$ and $q=p-1$.\n\n\\begin{theorem}\\label{tha1}\nAssume that $p>2$, $q=p-1$, and consider $u_0\\in C_0(\\bar{\\Omega})$ satisfying \\eqref{a1}. Then, there is a unique solution $u$ to \\eqref{a0a}-\\eqref{a0c} in the sense of Definition~\\ref{defa0} and\n\\begin{equation}\\label{a5}\n\\lim_{t\\to\\infty} \\left\\| t^{1\/(p-2)}\\ u(t) - f \\right\\|_\\infty = 0\\,,\n\\end{equation}\nwhere $f\\in C_0(\\bar{\\Omega})$ is the unique positive solution to\n\\begin{equation}\\label{a6}\n- \\Delta_p f - |\\nabla f|^{p-1} - \\frac{f}{p-2} =0 \\;\\;\\mbox{ in }\\;\\; \\Omega, \\quad f>0 \\;\\;\\mbox{ in }\\;\\;\\Omega\\,, \\quad f=0 \\;\\;\\mbox{ on }\\;\\; \\partial\\Omega\\,.\n\\end{equation}\nMoreover, $f\\in W^{1,\\infty}(\\Omega)$.\n\\end{theorem}\n\nLet us emphasize here that Theorem~\\ref{tha1} not only gives a description of the large time behaviour of $u$, but also provides the existence and uniqueness of the positive solution $f$ to \\eqref{a6}. To investigate the large time behaviour of $u$, no Liapunov functional seems to be available and we instead use the half-relaxed limits technique \\cite{BP88,CIL92}. 
To this end, several steps are needed, including a comparison principle for \\eqref{a6} which is established in Section~\\ref{sec:acl} and upper bounds which guarantee on the one hand that the solutions to \\eqref{a0a}-\\eqref{a0c} decay at the expected temporal decay rate and on the other hand that there is no loss of boundary conditions as discussed for instance in \\cite{BDL04}. The latter is achieved by the construction of suitable barrier functions. Also of importance is the positivity of the half-relaxed limits which allows us to apply the comparison lemma from Section~\\ref{sec:acl}.\n\nWe next turn to the case $q>p-1$ and establish the following result.\n\n\\begin{theorem}\\label{tha2}\nAssume that $p>2$, $q>p-1$, and consider $u_0\\in C_0(\\bar{\\Omega})$ satisfying \\eqref{a1}. If $q>p$, assume further that\n\\begin{equation}\\label{a1a}\nu_0(x) \\le \\frac{f(x)}{\\|\\nabla f\\|_\\infty}\\,, \\qquad x\\in\\bar{\\Omega}\\,,\n\\end{equation}\nwhere $f$ is the unique positive solution to \\eqref{a6}. Then, there is a unique solution $u$ to \\eqref{a0a}-\\eqref{a0c} in the sense of Definition~\\ref{defa0} and\n\\begin{equation}\\label{a5b}\n\\lim_{t\\to\\infty} \\left\\| t^{1\/(p-2)}\\ u(t) - f_0 \\right\\|_\\infty = 0\\,,\n\\end{equation}\nwhere $f_0\\in C_0(\\bar{\\Omega})$ is the unique positive solution to\n\\begin{equation}\\label{a7}\n- \\Delta_p f_0 - \\frac{f_0}{p-2} = 0 \\;\\;\\mbox{ in }\\;\\; \\Omega, \\quad f_0>0 \\;\\;\\mbox{ in }\\;\\;\\Omega\\,, \\quad f_0=0 \\;\\;\\mbox{ on }\\;\\; \\partial\\Omega\\,.\n\\end{equation}\n\\end{theorem}\n\nFor $q\\in[p-1,p]$, the well-posedness of \\eqref{a0a}-\\eqref{a0c} easily follows from \\cite{dL02} as already noticed in \\cite{BDL04} for $p=2$. For $q>p$ and an initial condition $u_0$ satisfying \\eqref{a1a}, it is a consequence of the Perron method and the comparison principle \\cite{CIL92}. 
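Let us briefly indicate the scaling behind Theorems~\\ref{tha1} and~\\ref{tha2} by a formal computation: inserting the separate variables ansatz $u(t,x) = t^{-1\/(p-2)}\\ f(x)$ into \\eqref{a0a} and using the $(p-1)$-homogeneity of $\\zeta\\longmapsto \\Delta_p\\zeta$, we find\n\\begin{eqnarray*}\n\\partial_t u & = & - \\frac{t^{-(p-1)\/(p-2)}}{p-2}\\ f\\,, \\qquad \\Delta_p u = t^{-(p-1)\/(p-2)}\\ \\Delta_p f\\,, \\\\\n|\\nabla u|^q & = & t^{-q\/(p-2)}\\ |\\nabla f|^q\\,.\n\\end{eqnarray*}\nFor $q=p-1$, all three terms carry the common factor $t^{-(p-1)\/(p-2)}$ and \\eqref{a0a} reduces to the profile equation \\eqref{a6}, while for $q>p-1$ we have $q\/(p-2)>(p-1)\/(p-2)$, so that the source term is of lower order as $t\\to\\infty$ and only the profile equation \\eqref{a7} for the $p$-Laplacian remains. 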
As for the large time behaviour, the existence and uniqueness of $f_0$ is shown in \\cite{MV94} and the main contribution of Theorem~\\ref{tha2} is the convergence \\eqref{a5b}. The convergence proof follows the same lines as that of Theorem~\\ref{tha1} but a new difficulty has to be overcome in the case $q=p$ for the boundary estimates. We also show that, when $q\\in (p-1,p]$, powers of the positive solution $f$ to \\eqref{a6} with an exponent in $(0,1]$ allow us to construct separate variables supersolutions to \\eqref{a0a}-\\eqref{a0b}.\n\nFinally, the announced blowup result is proved in Section~\\ref{sec:bu} by a classical argument \\cite{HM04,QS07}.\n\n\\medskip\n\nFor further use, we introduce some notations: for $\\xi\\in\\mathbb R^N$ and $X\\in\\mathcal{S}(N)$, $\\mathcal{S}(N)$ being the space of $N\\times N$ real-valued symmetric matrices, we define the functions $F_0$ and $F$ by\n\\begin{eqnarray}\nF_0(\\xi,X) & := & - |\\xi|^{p-2}\\ tr(X) - (p-2)\\ |\\xi|^{p-4}\\ \\langle X \\xi , \\xi \\rangle\\,, \\label{ham1} \\\\\nF(\\xi,X) & := & F_0(\\xi,X) - |\\xi|^q\\,. \\label{ham2}\n\\end{eqnarray}\n\n\\mysection{A comparison lemma}\\label{sec:acl}\n\nAn important tool for the uniqueness of the positive solution to \\eqref{a6} and the identification of the half-relaxed limits is the following comparison lemma between positive supersolutions and nonnegative subsolutions to the elliptic equation in \\eqref{a6}.\n\n\\begin{lemma}\\label{leb1}\nLet $w\\in USC(\\bar{\\Omega})$ and $W\\in LSC(\\bar{\\Omega})$ be respectively a bounded upper semicontinuous (usc) viscosity subsolution and a bounded lower semicontinuous (lsc) viscosity supersolution to\n\\begin{equation}\\label{b1}\n- \\Delta_p \\zeta - |\\nabla \\zeta|^{p-1} - \\frac{\\zeta}{p-2} = 0 \\;\\;\\mbox{ in }\\;\\; \\Omega\\,,\n\\end{equation}\nsuch that\n\\begin{eqnarray}\nw(x) = W(x) = 0 & \\mbox{ for } & x\\in\\partial\\Omega\\,, \\label{b2} \\\\\nW(x) > 0 & \\mbox{ for } & x\\in\\Omega\\,. 
\\label{b3}\n\\end{eqnarray}\nThen\n\\begin{equation}\\label{b4}\nw \\le W \\;\\;\\mbox{ in }\\;\\; \\bar{\\Omega}\\,.\n\\end{equation}\n\\end{lemma}\n\n\\begin{proof}\nFor $n\\ge N_0$ large enough, $\\Omega_n:= \\{ x\\in\\Omega\\ :\\ d(x,\\partial\\Omega)>1\/n\\}$ is a non-empty open subset of $\\Omega$. Since $\\overline{\\Omega_n}$ is compact and $W$ is lower semicontinuous, the function $W$ has a minimum in $\\overline{\\Omega_n}$ and the positivity \\eqref{b3} of $W$ in $\\overline{\\Omega_n}$ implies that\n\\begin{equation}\\label{b5}\n\\mu_n := \\min_{\\overline{\\Omega_n}}{\\{ W \\}}>0\\,.\n\\end{equation}\nSimilarly, the compactness of $\\bar{\\Omega}\\setminus\\Omega_n$ and the upper semicontinuity and boundedness of $w$ ensure that $w$ has at least one point of maximum $x_n$ in $\\bar{\\Omega}\\setminus\\Omega_n$ and we set\n\\begin{equation}\\label{b6}\n\\eta_n := w(x_n) = \\max_{\\bar{\\Omega}\\setminus\\Omega_n}{\\{w\\}} \\ge 0\\,,\n\\end{equation}\nthe maximum being nonnegative since $\\partial\\Omega\\subset \\bar{\\Omega}\\setminus\\Omega_n$ and $w$ vanishes on $\\partial\\Omega$ by \\eqref{b2}. We claim that\n\\begin{equation}\\label{b7}\n\\lim_{n\\to\\infty} \\eta_n=0\\,.\n\\end{equation}\nIndeed, owing to the compactness of $\\bar{\\Omega}$ and the definition of $\\Omega_n$, there are $y\\in\\partial\\Omega$ and a subsequence of $(x_n)_{n\\ge N_0}$ (not relabeled) such that $x_n\\to y$ as $n\\to\\infty$. 
Since $w(y)=0$, we deduce from the upper semicontinuity of $w$ that\n$$\n\\lim_{\\varepsilon\\to 0}\\ \\sup{\\{ w(x)\\ :\\ x\\in B(y,\\varepsilon)\\cap\\bar{\\Omega} \\}} \\le 0\\,.\n$$\nGiven $\\varepsilon>0$ small enough, there is $n_\\varepsilon$ large enough such that $x_n\\in B(y,\\varepsilon)\\cap\\bar{\\Omega}$ for $n\\ge n_\\varepsilon$ from which we deduce that\n$$\n0 \\le \\eta_n = w(x_n) \\le \\sup{\\{ w(x)\\ :\\ x\\in B(y,\\varepsilon)\\cap\\bar{\\Omega} \\}}\\,, \\quad n\\ge n_\\varepsilon\\,.\n$$\nTherefore,\n$$\n0 \\le \\limsup_{n\\to\\infty} \\eta_n \\le \\sup{\\{ w(x)\\ :\\ x\\in B(y,\\varepsilon)\\cap\\bar{\\Omega} \\}}\\,,\n$$\nand letting $\\varepsilon\\to 0$ allows us to conclude that zero is a cluster point of $(\\eta_n)_{n\\ge N_0}$ as $n\\to\\infty$. The claim \\eqref{b7} then follows from the monotonicity of $(\\eta_n)_{n\\ge N_0}$.\n\nNow, fix $s\\in (0,\\infty)$. For $\\delta>0$ and $n\\ge N_0$, we define\n\\begin{eqnarray*}\nz_n(t,x) & := & (t+s)^{-1\/(p-2)}\\ w(x) - s^{-1\/(p-2)}\\ \\eta_n\\,, \\quad (t,x)\\in [0,\\infty)\\times\\bar{\\Omega}\\,, \\\\\nZ_\\delta(t,x) & := & (t+\\delta)^{-1\/(p-2)}\\ W(x)\\,, \\quad (t,x)\\in [0,\\infty)\\times\\bar{\\Omega}\\,.\n\\end{eqnarray*}\nIt then follows from the assumptions on $w$ and $W$ that $z_n$ and $Z_\\delta$ are respectively a bounded usc viscosity subsolution and a bounded lsc viscosity supersolution to\n$$\n\\partial_t \\zeta - \\Delta_p \\zeta - |\\nabla \\zeta|^{p-1} = 0 \\;\\;\\mbox{ in }\\;\\; (0,\\infty)\\times\\Omega\\,,\n$$\nand satisfy\n$$\nZ_\\delta(t,x) = 0 \\ge - s^{-1\/(p-2)}\\ \\eta_n = z_n(t,x)\\,, \\quad (t,x)\\in (0,\\infty)\\times\\partial\\Omega\\,.\n$$\nMoreover, if\n\\begin{equation}\\label{b8}\n0 < \\delta < \\left[ \\frac{\\mu_n}{1+\\|w\\|_\\infty} \\right]^{p-2}\\ s\\,,\n\\end{equation}\nit follows from \\eqref{b5} and \\eqref{b8} that, for $x\\in\\Omega_n$,\n$$\nZ_\\delta(0,x) = \\delta^{-1\/(p-2)}\\ W(x) \\ge \\delta^{-1\/(p-2)}\\ \\mu_n \\ge s^{-1\/(p-2)}\\ 
\\|w\\|_\\infty \\ge z_n(0,x)\\,,\n$$\nand from \\eqref{b6} that, for $x\\in\\bar{\\Omega}\\setminus\\Omega_n$,\n$$\nZ_\\delta(0,x)\\ge 0 \\ge s^{-1\/(p-2)}\\ (w(x)-\\eta_n) = z_n(0,x)\\,.\n$$\nWe are then in a position to apply the comparison principle \\cite[Theorem~8.2]{CIL92} to conclude that\n\\begin{equation}\\label{b9}\nz_n(t,x)\\le Z_\\delta(t,x) \\,, \\qquad (t,x)\\in [0,\\infty)\\times\\bar{\\Omega}\\,,\n\\end{equation}\nfor any $\\delta>0$ and $n\\ge N_0$ satisfying \\eqref{b8}. According to \\eqref{b8}, the parameter $\\delta$ can be taken to be arbitrarily small in \\eqref{b9}, from which we deduce that\n$$\n(t+s)^{-1\/(p-2)}\\ w(x) - s^{-1\/(p-2)}\\ \\eta_n \\le t^{-1\/(p-2)}\\ W(x)\\,, \\qquad (t,x)\\in (0,\\infty)\\times\\bar{\\Omega}\\,,\n$$\nfor $n\\ge N_0$. We next pass to the limit as $n\\to\\infty$ with the help of \\eqref{b7} to conclude that\n$$\n(t+s)^{-1\/(p-2)}\\ w(x) \\le t^{-1\/(p-2)}\\ W(x)\\,, \\qquad (t,x)\\in (0,\\infty)\\times\\bar{\\Omega}\\,.\n$$\nWe finally let $s\\to 0$ and take $t=1$ in the above inequality to obtain \\eqref{b4}.\n\\end{proof}\n\nA straightforward consequence of Lemma~\\ref{leb1} is the uniqueness of the friendly giant.\n\\begin{corollary}\\label{cob2}\nThere is at most one positive viscosity solution to \\eqref{a6}.\n\\end{corollary}\n\nArguing in a similar way, we obtain an analogous result for the $p$-Laplacian:\n\\begin{lemma}\\label{leb3}\nLet $w\\in USC(\\bar{\\Omega})$ and $W\\in LSC(\\bar{\\Omega})$ be respectively a bounded usc viscosity subsolution and a bounded lsc viscosity supersolution to\n\\begin{equation}\\label{b10}\n- \\Delta_p \\zeta - \\frac{\\zeta}{p-2} = 0 \\;\\;\\mbox{ in }\\;\\; \\Omega\\,,\n\\end{equation}\nsatisfying \\eqref{b2} and \\eqref{b3}. Then $w \\le W$ in $\\bar{\\Omega}$.\n\\end{lemma}\n\n\\mysection{Well-posedness: $q\\in [p-1,p]$}\\label{sec:wp1}\n\n\\begin{proposition}\\label{pry1}\nAssume that $q\\in [p-1,p]$ and consider $u_0\\in C_0(\\bar{\\Omega})$ satisfying \\eqref{a1}. 
Then, there is a unique solution $u$ to \\eqref{a0a}-\\eqref{a0c} in the sense of Definition~\\ref{defa0}.\n\\end{proposition}\n\n\\begin{proof}\nSince the comparison principle holds true for \\eqref{a0a}-\\eqref{a0c} by \\cite[Theorem~8.2]{CIL92}, Proposition~\\ref{pry1} follows at once from \\cite[Corollary~6.2]{dL02} provided that $\\Sigma_-^p=\\Sigma_+^p=(0,\\infty)\\times\\partial\\Omega$, where the sets $\\Sigma_-^p$ and $\\Sigma_+^p$ are defined as follows: denoting the distance $d(x,\\partial\\Omega)$ from $x\\in\\bar{\\Omega}$ to $\\partial\\Omega$ by $d(x)$, $d$ is a smooth function in a neighbourhood of $\\partial\\Omega$ in $\\bar{\\Omega}$ and $(t,x)\\in (0,\\infty)\\times\\partial\\Omega$ belongs to $\\Sigma_-^p$ if\neither\n\\begin{equation}\\label{y1}\n\\liminf_{(y,\\alpha)\\to(x,0)} \\left[ F\\left( \\frac{\\nabla d(y) + o_\\alpha(1)}{\\alpha} , - \\frac{\\nabla d(y) \\otimes \\nabla d(y) + o_\\alpha(1)}{\\alpha^2} \\right) + \\frac{o_\\alpha(1)}{\\alpha} \\right] > 0\\,,\n\\end{equation}\nor\n\\begin{equation}\\label{y2}\n\\liminf_{(y,\\alpha)\\to(x,0)} \\left[ F\\left( \\frac{\\nabla d(y) + o_\\alpha(1)}{\\alpha} , \\frac{D^2 d(y) + o_\\alpha(1)}{\\alpha} \\right) + \\frac{o_\\alpha(1)}{\\alpha} \\right] > 0\\,.\n\\end{equation}\nSimilarly,\n$(t,x)\\in (0,\\infty)\\times\\partial\\Omega$ belongs to $\\Sigma_+^p$ if\neither\n\\begin{equation}\\label{y3}\n\\limsup_{(y,\\alpha)\\to(x,0)} \\left[ F\\left( \\frac{-\\nabla d(y) + o_\\alpha(1)}{\\alpha} , \\frac{\\nabla d(y) \\otimes \\nabla d(y) + o_\\alpha(1)}{\\alpha^2} \\right) + \\frac{o_\\alpha(1)}{\\alpha} \\right] < 0\\,,\n\\end{equation}\nor\n\\begin{equation}\\label{y4}\n\\limsup_{(y,\\alpha)\\to(x,0)} \\left[ F\\left( -\\frac{\\nabla d(y) + o_\\alpha(1)}{\\alpha} , -\\frac{D^2 d(y) + o_\\alpha(1)}{\\alpha} \\right) + \\frac{o_\\alpha(1)}{\\alpha} \\right] < 0\\,.\n\\end{equation}\n\nNow, consider $t>0$ and $x\\in\\partial\\Omega$. 
For $y\\in\\bar{\\Omega}$ sufficiently close to $x$ and $\\alpha \\in (0,1)$, we have\n\\begin{eqnarray*}\n& & \\alpha^p\\ \\left[ F\\left( \\frac{\\nabla d(y) + o_\\alpha(1)}{\\alpha} , - \\frac{\\nabla d(y) \\otimes \\nabla d(y) + o_\\alpha(1)}{\\alpha^2} \\right) + \\frac{o_\\alpha(1)}{\\alpha} \\right] \\\\\n& = & |\\nabla d(y) + o_\\alpha(1)|^{p-2}\\ (|\\nabla d(y)|^2 + o_\\alpha(1)) + (p-2)\\ |\\nabla d(y) + o_\\alpha(1)|^{p-4}\\ (|\\nabla d(y)|^4 + o_\\alpha(1)) \\\\\n& & \\ - \\alpha^{p-q}\\ |\\nabla d(y) + o_\\alpha(1)|^q + \\alpha^{p-1}\\ o_\\alpha(1) \\\\\n& = & (p-1)\\ |\\nabla d(y)|^p - \\alpha^{p-q}\\ |\\nabla d(y)|^q + o_\\alpha(1)\\,.\n\\end{eqnarray*}\nSince $|\\nabla d(x)|=1$ and $p\\ge q$, the right-hand side above converges as $(y,\\alpha)\\to (x,0)$ either to $p-1$ if $q<p$ or to $p-2$ if $q=p$; in both cases the limit is positive since $p>2$. Therefore, the condition \\eqref{y1} is satisfied so that $(t,x)$ belongs to $\\Sigma_-^p$. Similarly, for $y\\in\\bar{\\Omega}$ sufficiently close to $x$ and $\\alpha \\in (0,1)$, we have\n\\begin{eqnarray*}\n& & \\alpha^p\\ \\left[ F\\left( \\frac{-\\nabla d(y) + o_\\alpha(1)}{\\alpha} , \\frac{\\nabla d(y) \\otimes \\nabla d(y) + o_\\alpha(1)}{\\alpha^2} \\right) + \\frac{o_\\alpha(1)}{\\alpha} \\right] \\\\\n& = & - |\\nabla d(y) + o_\\alpha(1)|^{p-2}\\ (|\\nabla d(y)|^2 + o_\\alpha(1)) - (p-2)\\ |\\nabla d(y) + o_\\alpha(1)|^{p-4}\\ (|\\nabla d(y)|^4 + o_\\alpha(1)) \\\\\n& & \\ - \\alpha^{p-q}\\ |\\nabla d(y) + o_\\alpha(1)|^q + \\alpha^{p-1}\\ o_\\alpha(1) \\\\\n& = & - (p-1)\\ |\\nabla d(y)|^p - \\alpha^{p-q}\\ |\\nabla d(y)|^q + o_\\alpha(1)\\,,\n\\end{eqnarray*}\nfrom which we readily infer that the condition \\eqref{y3} is satisfied. 
Therefore, $(t,x)$ belongs to $\\Sigma_+^p$ and we have thus shown that $\\Sigma_-^p=\\Sigma_+^p=(0,\\infty)\\times\\partial\\Omega$ as expected.\n\\end{proof}\n\n\\mysection{Large time behaviour: $q\\in [p-1,p]$}\\label{sec:ltb}\n\nAs already mentioned in the Introduction, the proofs of Theorems~\\ref{tha1} and~\\ref{tha2} involve several steps: we first show in the next section (Section~\\ref{sec:ub}) that the temporal decay rate of $\\|u(t)\\|_\\infty$ is indeed $t^{-1\/(p-2)}$. To this end we construct suitable supersolutions which differ according to whether $q=p-1$ or $q>p-1$. In a second step (Section~\\ref{sec:le}), we prove boundary estimates for large times which guarantee that no loss of boundary conditions occurs throughout the time evolution. Here again, we need to perform different proofs for $q\\in [p-1,p)$ and $q=p$. The half-relaxed limits technique is then employed in Section~\\ref{sec:cv} to show the expected convergence after introducing self-similar variables, and the existence of a positive solution $f$ to \\eqref{a6} as well. The final result states that, if $u_0$ is bounded from above by $B\\ f^\\beta$ for some $B>0$ and $\\beta\\in (0,1]$, a similar bound holds true for $u(t)$ for positive times but with a possibly lower exponent $\\beta$ (Section~\\ref{sec:iub}).\n\n\\subsection{Upper bounds}\\label{sec:ub}\n\n\\begin{lemma}\\label{lec1} Assume that $q=p-1$. There is $C_1>0$ depending only on $p$, $q$, $\\Omega$, and $\\|u_0\\|_\\infty$ such that\n\\begin{equation}\\label{c1}\nu(t,x) \\le C_1\\ (1+t)^{-1\/(p-2)}\\,, \\qquad (t,x)\\in (0,\\infty)\\times\\bar{\\Omega}\\,.\n\\end{equation}\n\\end{lemma}\n\n\\begin{proof}\nConsider $x_0\\not\\in\\bar{\\Omega}$ and $R_0>0$ such that $\\Omega\\subset B(x_0,R_0)$. 
For $A>0$, $R>R_0$, $t\\ge 0$, and $x\\in\\mathbb R^N$, we put $r=|x-x_0|$,\n$$\nS(t,x) := A\\ (1+t)^{-1\/(p-2)}\\ \\sigma(r)\\,, \\qquad \\sigma(r) := \\frac{p-1}{p}\\ \\left( e^{pR\/(p-1)} - e^{pr\/(p-1)} \\right)\\,,\n$$\nand assume that\n\\begin{equation}\\label{c2}\nA \\ge \\max{\\left\\{ \\left( \\frac{e^{pR\/(p-1)}}{(p-1)(p-2)} \\right)^{1\/(p-2)} , \\frac{\\|u_0\\|_\\infty}{\\sigma(R_0)} \\right\\}}\\,.\n\\end{equation}\n\nSince $x_0$ does not belong to $\\bar{\\Omega}$, the function $S$ is $C^\\infty$-smooth in $[0,\\infty)\\times\\bar{\\Omega}$ and, it follows from \\eqref{c2} that, for $(t,x)\\in Q$,\n\\begin{eqnarray*}\n& & (1+t)^{(p-1)\/(p-2)}\\ \\left\\{ \\partial_t S(t,x) + F(\\nabla S(t,x), D^2 S(t,x)) \\right\\}\\\\\n& = & -\\frac{A}{p-2}\\ \\sigma(r) - A^{p-1}\\ |\\sigma'(r)|^{p-1} \\\\\n& &\\ - (p-1)\\ A^{p-1}\\ |\\sigma'(r)|^{p-2}\\ \\sigma''(r) - (N-1)\\ A^{p-1}\\ \\frac{|\\sigma'(r)|^{p-2} \\sigma'(r)}{r} \\\\\n& = & A\\ \\left[ A^{p-2}\\ \\left( p-1+\\frac{N-1}{r} \\right)\\ e^{pr} - \\frac{\\sigma(r)}{(p-2)} \\right] \\\\\n& \\ge & A\\ \\left( (p-1)\\ A^{p-2} - \\frac{e^{pR\/(p-1)}}{(p-2)} \\right) \\ge 0\\,.\n\\end{eqnarray*}\nTherefore, the condition \\eqref{c2} guarantees that $S$ is a supersolution to \\eqref{a0a} in $Q$. In addition, since $|x-x_0|\\le R_0$ for $x\\in\\bar{\\Omega}$ and $\\sigma$ is decreasing, it follows from \\eqref{c2} that\n$$\nu_0(x) \\le \\|u_0\\|_\\infty \\le A\\ \\sigma(R_0) \\le S(0,x)\\,, \\qquad x\\in\\bar{\\Omega}\\,,\n$$\nwhile $u(t,x) = 0 \\le S(t,x)$ for $(t,x)\\in (0,\\infty)\\times\\partial\\Omega$. The comparison principle \\cite[Theorem~8.2]{CIL92} then entails that $u\\le S$ in $[0,\\infty)\\times\\bar{\\Omega}$, and \\eqref{c1} follows with $C_1 := A\\ \\sigma(0)$.\n\\end{proof}\n\n\\begin{lemma}\\label{lec2} Assume that $q>p-1$. There is $C_1>0$ depending only on $p$, $q$, $\\Omega$, and $\\|u_0\\|_\\infty$ such that\n\\begin{equation}\\label{c3}\nu(t,x) \\le C_1\\ (1+t)^{-1\/(p-2)}\\,, \\qquad (t,x)\\in (0,\\infty)\\times\\bar{\\Omega}\\,.\n\\end{equation}\n\\end{lemma}\n\n\\begin{proof}\nConsider $x_0\\not\\in\\bar{\\Omega}$ and $R_0>0$ such that $\\Omega\\subset B(x_0,R_0)$. 
For $A>0$, $\\delta>0$, $R>R_0$, $t\\ge 0$, and $x\\in\\mathbb R^N$, we put $r=|x-x_0|$,\n$$\nS(t,x) := A\\ (1+\\delta t)^{-1\/(p-2)}\\ \\varphi(r)\\,, \\qquad \\varphi(r) := \\frac{p-1}{p}\\ \\left( R^{p\/(p-1)} - r^{p\/(p-1)} \\right)\\,,\n$$\nand assume that\n$$\nA = \\left( \\frac{N}{2R_0^{q\/(p-1)}} \\right)^{1\/(q-p+1)}\\,, \\quad R= \\left( R_0^{p\/(p-1)} + \\frac{p \\|u_0\\|_\\infty}{(p-1) A}\\right)^{(p-1)\/p}\\,, \\quad \\delta = \\frac{N (p-2) A^{p-2}}{2 R^{p\/(p-1)}} \\,.\n$$\n\nSince $x_0$ does not belong to $\\bar{\\Omega}$, the function $S$ is $C^\\infty$-smooth in $[0,\\infty)\\times\\bar{\\Omega}$ and, it follows from the properties $\\Omega\\subset B(x_0,R_0)$ and $q>p-1$ that, for $(t,x)\\in Q$,\n\\begin{eqnarray*}\n& & (1+\\delta t)^{(p-1)\/(p-2)}\\ \\left\\{ \\partial_t S(t,x) + F(\\nabla S(t,x), D^2 S(t,x)) \\right\\}\\\\\n& = & -\\frac{A \\delta}{p-2}\\ \\varphi(r) + N\\ A^{p-1} - A^q\\ (1+\\delta t)^{-(q-p+1)\/(p-2)}\\ r^{q\/(p-1)} \\\\\n& \\ge & A^{p-1}\\ \\left[ N - A^{q-p+1}\\ R_0^{q\/(p-1)} - \\frac{\\delta R^{p\/(p-1)}}{(p-2) A^{p-2}} \\right] \\\\\n& \\ge & 0\\,.\n\\end{eqnarray*}\nTherefore, the function $S$ is a supersolution to \\eqref{a0a} in $Q$ and the choice of $A$ and $R$ also guarantees that\n$$\nu_0(x) \\le \\|u_0\\|_\\infty \\le A\\ \\varphi(R_0) \\le S(0,x)\\,, \\qquad x\\in\\bar{\\Omega}\\,.\n$$\nFinally,\n$$\nu(t,x) = 0 \\le A\\ (1+\\delta t)^{-1\/(p-2)}\\ \\varphi(R_0) \\le S(t,x)\\,, \\qquad (t,x)\\in (0,\\infty)\\times\\partial\\Omega\\,,\n$$\nsince $|x-x_0|\\le R_0<R$ for $x\\in\\bar{\\Omega}$ and $\\varphi$ is decreasing. The comparison principle \\cite[Theorem~8.2]{CIL92} then entails that $u\\le S$ in $[0,\\infty)\\times\\bar{\\Omega}$, and \\eqref{c3} follows with $C_1 := A\\ \\varphi(0)\\ \\max{\\{ 1 , \\delta^{-1\/(p-2)} \\}}$.\n\\end{proof}\n\n\\subsection{Boundary estimates}\\label{sec:le}\n\n\\begin{lemma}\\label{lec3}\nAssume that $q\\in [p-1,p)$. Then there is $L_1>0$ such that\n$$\n|u(t,x)|=|u(t,x)-u(t,x_0)| \\le \\frac{L_1}{(1+t)^{1\/(p-2)}}\\ |x-x_0|\\,, \\quad (t,x,x_0)\\in [1,\\infty)\\times\\bar{\\Omega}\\times\\partial\\Omega\\,.\n$$\n\\end{lemma}\n\n\\begin{proof}\nSince the boundary $\\partial\\Omega$ of $\\Omega$ is smooth, it satisfies the uniform exterior sphere condition by \\cite[Section~14.6]{GT01}, that is, there is $R_\\Omega>0$ such that, for each $x_0\\in\\partial\\Omega$, there is $y_0\\in\\mathbb R^N$ 
satisfying $|x_0-y_0|=R_\\Omega$ and $B(y_0,R_\\Omega)\\cap\\Omega=\\emptyset$.\n\nWe fix positive real numbers $A$, $M$, and $\\delta$ such that\n\\begin{equation}\\label{c4}\nA := \\max{\\left\\{ M , \\frac{e C_1}{e-1} , \\left( \\frac{4 e^{p-1}}{p-2} \\right)^{1\/(p-2)} \\right\\}}\\,, \\quad M:= \\frac{2^{1\/(p-2)} \\|u_0\\|_\\infty}{2^{1\/(p-2)} - 1}\\,,\n\\end{equation}\nand\n\\begin{equation}\\label{c5}\n0< \\delta < \\min{\\left\\{ 1 , \\frac{(p-2) R_\\Omega}{N-1} , \\left( \\frac{1}{2 A^{q-p+1}} \\right)^{1\/(p-q)} \\right\\}}\\,, \\quad \\Omega_\\delta:= \\{ x\\in\\Omega\\ :\\ d(x,\\partial\\Omega)>\\delta \\} \\ne \\emptyset\\,,\n\\end{equation}\nthe constant $C_1$ being defined in Lemma~\\ref{lec1} if $q=p-1$ and Lemma~\\ref{lec2} if $q\\in (p-1,p)$.\n\nWe next consider $t_0\\ge 1$, $x_0\\in\\partial\\Omega$, and let $y_0\\in\\mathbb R^N$ be such that $|x_0-y_0|=R_\\Omega$ and $B(y_0,R_\\Omega)\\cap\\Omega=\\emptyset$. We define the open subset $U_{\\delta,x_0}$ of $\\mathbb R^N$ by\n$$\nU_{\\delta,x_0} := \\{ x\\in \\Omega\\ : R_\\Omega < |x-y_0| < R_\\Omega+\\delta \\}\\,,\n$$\nand the function\n$$\nS_{\\delta,x_0}(t,x) := \\frac{A}{(1+t)^{1\/(p-2)}}\\ \\left( 1 - e^{-(|x-y_0|-R_\\Omega)\/\\delta} \\right) + \\frac{M}{(1+t)^{1\/(p-2)}} - \\frac{M}{(1+t_0)^{1\/(p-2)}}\n$$\nfor $(t,x)\\in [0,t_0]\\times \\overline{U_{\\delta,x_0}}$. Since $y_0\\not\\in \\overline{U_{\\delta,x_0}}$, the function $S_{\\delta,x_0}$ is $C^\\infty$-smooth in $[0,t_0]\\times \\overline{U_{\\delta,x_0}}$. 
For $(t,x)\\in (0,t_0)\\times U_{\\delta,x_0}$, we set $r:=|x-y_0|-R_\\Omega\\in (0,\\delta)$ and compute\n\\begin{eqnarray*}\n& & \\frac{(1+t)^{(p-1)\/(p-2)}}{A^{p-1}}\\ \\left( \\partial_t S_{\\delta,x_0} - \\Delta_p S_{\\delta,x_0} - |\\nabla S_{\\delta,x_0}|^{q} \\right)(t,x) \\\\\n& = & - \\frac{1- e^{-r\/\\delta}}{(p-2) A^{p-2}} - \\frac{(N-1) e^{-(p-1)r\/\\delta}}{(r+R_\\Omega) \\delta^{p-1}} + \\frac{(p-1) e^{-(p-1)r\/\\delta}}{\\delta^p} \\\\\n& &\\ - \\frac{e^{-qr\/\\delta}}{\\delta^{q}}\\ \\frac{A^{q-p+1}}{(1+t)^{(q-p+1)\/(p-2)}} - \\frac{M}{(p-2) A^{p-1}}\\\\\n& \\ge & \\frac{e^{-(p-1)r\/\\delta}}{\\delta^p}\\ \\left[ p-1- \\frac{N-1}{r+R_\\Omega}\\ \\delta - \\frac{A^{q-p+1}\\ \\delta^{p-q}}{e^{(q-p+1)r\/\\delta}} - \\frac{\\delta^p e^{(p-1)r\/\\delta}}{(p-2) A^{p-2}} - \\frac{M \\delta^p e^{(p-1)r\/\\delta}}{(p-2) A^{p-1}} \\right] \\\\\n& \\ge & \\frac{e^{-(p-1)r\/\\delta}}{\\delta^p}\\ \\left[ p-1- \\frac{N-1}{R_\\Omega}\\ \\delta - A^{q-p+1}\\ \\delta^{p-q} - \\frac{e^{p-1}}{(p-2) A^{p-2}} - \\frac{M e^{p-1}}{(p-2) A^{p-1}} \\right] \\\\\n& \\ge & \\frac{e^{-(p-1)r\/\\delta}}{\\delta^p}\\ \\left[ 1 - A^{q-p+1}\\ \\delta^{p-q} - \\frac{2e^{p-1}}{(p-2) A^{p-2}} \\right] \\ge 0\\,,\n\\end{eqnarray*}\nthe last two inequalities being a consequence of the choice \\eqref{c4} and \\eqref{c5} of $\\delta$, $A$, and $M$. Therefore, $S_{\\delta,x_0}$ is a supersolution to \\eqref{a0a} in $(0,\\infty)\\times U_{\\delta,x_0}$. Moreover, since $t_0\\ge 1$, we have\n$$\nS_{\\delta,x_0}(0,x) \\ge M - \\frac{M}{2^{1\/(p-2)}} = \\|u_0\\|_\\infty \\ge u_0(x)\\,, \\quad x\\in \\overline{U_{\\delta,x_0}}\\,,\n$$\nby \\eqref{c4}. It also follows from \\eqref{c1} and \\eqref{c3} that $u(t,x)\\le C_1\\ (1+t)^{-1\/(p-2)}$ for $t\\ge 0$ and $x\\in\\bar{\\Omega}$. Then, if $(t,x)\\in (0,t_0)\\times \\partial U_{\\delta,x_0}$, either $x\\in\\partial\\Omega$ and $u(t,x)=0\\le S_{\\delta,x_0}(t,x)$. 
Or $r=|x-y_0|-R_\\Omega=\\delta$ and it follows from \\eqref{c4} that\n$$\nS_{\\delta,x_0}(t,x) \\ge \\frac{A (1-e^{-1})}{(1+t)^{1\/(p-2)}} \\ge \\frac{C_1}{(1+t)^{1\/(p-2)}} \\ge u(t,x)\\,.\n$$\nWe then deduce from the comparison principle \\cite[Theorem~8.2]{CIL92} that $u(t,x)\\le S_{\\delta,x_0}(t,x)$ for $t\\in [0,t_0]$ and $x\\in\\overline{U_{\\delta,x_0}}$. In particular, for $t=t_0$,\n$$\nu(t_0,x) \\le \\frac{A}{(1+t_0)^{1\/(p-2)}}\\ \\left( 1 - e^{-(|x-y_0|-R_\\Omega)\/\\delta} \\right)\\,, \\quad x\\in \\overline{U_{\\delta,x_0}}\\,.\n$$\nConsequently,\n$$\n0 \\le u(t_0,x) - u(t_0,x_0) = u(t_0,x) \\le \\frac{A}{(1+t_0)^{1\/(p-2)}}\\ \\left( 1 - e^{-(|x-y_0|-R_\\Omega)\/\\delta} \\right)\\,, \\quad x\\in \\overline{U_{\\delta,x_0}}\\,,\n$$\nwhence, since $|x_0-y_0|-R_\\Omega=0$,\n\\begin{equation}\\label{c6}\n0\\le u(t_0,x) - u(t_0,x_0) \\le \\frac{A}{\\delta (1+t_0)^{1\/(p-2)}}\\ |x-x_0|\\,, \\quad x\\in\\overline{U_{\\delta,x_0}}\\,.\n\\end{equation}\n\nConsider finally $x\\in\\Omega$ and $x_0\\in\\partial\\Omega$. If $|x-x_0| \\ge\\delta\/2$, it follows from \\eqref{c1} that\n$$\n|u(t_0,x)-u(t_0,x_0)| = u(t_0,x) \\le \\frac{2 C_1}{\\delta (1+t_0)^{1\/(p-2)}}\\ |x-x_0|\\,.\n$$\nIf $|x-x_0|<\\delta\/2$, let $y_0\\in\\mathbb R^N$ be such that $|x_0-y_0|=R_\\Omega$ and $B(y_0,R_\\Omega)\\cap\\Omega=\\emptyset$. Since $x\\in\\Omega$, $|x-y_0|>R_\\Omega$ and\n$$\n|x-y_0| \\le |x-x_0|+|x_0-y_0| < R_\\Omega+\\delta\\,.\n$$\nConsequently, $x\\in U_{\\delta,x_0}$ and we infer from \\eqref{c6} that\n$$\n|u(t_0,x) - u(t_0,x_0)| \\le \\frac{A}{\\delta (1+t_0)^{1\/(p-2)}}\\ |x-x_0|\\,.\n$$\n We have thus established Lemma~\\ref{lec3} with $L_1:= \\max{\\{ 2 C_1 , A \\}}\/\\delta$ for $(t,x,x_0)\\in[1,\\infty)\\times \\Omega \\times \\partial\\Omega$. 
The extension to $[1,\\infty)\\times\\bar{\\Omega}\\times\\partial\\Omega$ then readily follows thanks to the continuity of $u$ up to the boundary of $\\Omega$.\n\\end{proof}\n\nThe previous proof does not apply to the case $q=p$ as the term $A^{q-p+1}\\ \\delta^{p-q}$ cannot be made arbitrarily small by a suitable choice of $\\delta$. Still, a similar result is valid for $q=p$ but first requires a change of variable.\n\n\\begin{lemma}\\label{lec4}\nAssume that $q=p$. Then there is $L_1>0$ such that\n$$\n|u(t,x)|=|u(t,x)-u(t,x_0)| \\le \\frac{L_1}{(1+t)^{1\/(p-2)}}\\ |x-x_0|\\,, \\quad (t,x,x_0)\\in [1,\\infty) \\times \\bar{\\Omega} \\times \\partial\\Omega\\,.\n$$\n\\end{lemma}\n\n\\begin{proof}\nWe define $h:=e^{u\/(p-1)}-1$ and notice that\n\\begin{equation}\\label{c7}\n\\frac{u}{p-1} \\le h \\le \\frac{e^{u\/(p-1)}}{p-1}\\ u\\,.\n\\end{equation}\nBy \\eqref{a0a}-\\eqref{a0c} and \\cite[Corollaire~2.1]{Bl94} (or \\cite[Proposition~2.5]{BdCD97}), $h$ is a viscosity solution to\n\\begin{eqnarray}\n\\partial_t \\left[ \\left( \\frac{1+h}{p-1} \\right)^{p-1} \\right] - \\Delta_p h & = & 0 \\;\\;\\mbox{ in }\\;\\; Q\\,, \\label{c8a}\\\\\nh & = & 0 \\;\\;\\mbox{ on }\\;\\; (0,\\infty)\\times\\partial\\Omega\\,, \\label{c8b}\\\\\nh(0) & = & e^{u_0\/(p-1)}-1 \\;\\;\\mbox{ in }\\;\\; \\Omega\\,. 
\\label{c8c}\n\\end{eqnarray}\n\nWe fix positive real numbers $A$, $M$, and $\\delta$ such that\n\\begin{equation}\\label{c9}\nA := \\max{\\left\\{ 1 , M , \\frac{e C_1}{(p-1) (e-1)}\\ e^{C_1\/(p-1)} \\right\\}}\\,, \\quad M:= \\frac{2^{1\/(p-2)} e^{\\|u_0\\|_\\infty\/(p-1)}}{2^{1\/(p-2)} - 1}\\,,\n\\end{equation}\nand\n\\begin{equation}\\label{c10}\n0< \\delta < \\min{\\left\\{ 1 , \\frac{(p-2) R_\\Omega}{N-1} , \\left( \\frac{p-2}{2 e^{p-1}} \\right)^{1\/p}\\ \\left( \\frac{3}{p-1} \\right)^{-(p-2)\/p} \\right\\}}\\,, \\quad \\Omega_\\delta \\ne \\emptyset\\,,\n\\end{equation}\nthe constant $C_1$ and the set $\\Omega_\\delta$ being defined in Lemma~\\ref{lec2} and \\eqref{c5}, respectively.\n\nWe next consider $t_0\\ge 1$, $x_0\\in\\partial\\Omega$, and let $y_0\\in\\mathbb R^N$ be such that $|x_0-y_0|=R_\\Omega$ and $B(y_0,R_\\Omega)\\cap\\Omega=\\emptyset$, the definition of $R_\\Omega$ and the existence of $y_0$ being stated at the beginning of the proof of Lemma~\\ref{lec3}. We still define the open subset $U_{\\delta,x_0}$ of $\\mathbb R^N$ by\n$$\nU_{\\delta,x_0} := \\{ x\\in \\Omega\\ : R_\\Omega < |x-y_0| < R_\\Omega+\\delta \\}\\,,\n$$\nand the function\n$$\nS_{\\delta,x_0}(t,x) := \\frac{A}{(1+t)^{1\/(p-2)}}\\ \\left( 1 - e^{-(|x-y_0|-R_\\Omega)\/\\delta} \\right) + \\frac{M}{(1+t)^{1\/(p-2)}} - \\frac{M}{(1+t_0)^{1\/(p-2)}}\n$$\nfor $(t,x)\\in [0,t_0]\\times \\overline{U_{\\delta,x_0}}$. Since $y_0\\not\\in \\overline{U_{\\delta,x_0}}$, the function $S_{\\delta,x_0}$ is $C^\\infty$-smooth in $[0,t_0]\\times \\overline{U_{\\delta,x_0}}$. 
For $(t,x)\\in (0,t_0)\\times U_{\\delta,x_0}$, we set $r:=|x-y_0|-R_\\Omega\\in (0,\\delta)$ and compute\n\\begin{eqnarray*}\n& & \\frac{(1+t)^{(p-1)\/(p-2)}}{A^{p-1}}\\ \\left( \\partial_t \\left[ \\left( \\frac{1+S_{\\delta,x_0}}{p-1} \\right)^{p-1} \\right] - \\Delta_p S_{\\delta,x_0} \\right)(t,x) \\\\\n& = & - \\frac{(1-e^{-r\/\\delta})}{(p-2) (p-1)^{p-2}}\\ \\frac{(1+S_{\\delta,x_0})^{p-2}}{A^{p-2}} - \\frac{M}{(p-2) (p-1)^{p-2}} \\frac{(1+S_{\\delta,x_0})^{p-2}}{A^{p-1}}\\\\\n& & + \\frac{(p-1) e^{-(p-1)r\/\\delta}}{\\delta^p} - \\frac{(N-1) e^{-(p-1)r\/\\delta}}{(r+R_\\Omega) \\delta^{p-1}} \\\\\n& \\ge & \\frac{e^{-(p-1)r\/\\delta}}{\\delta^p}\\ \\left[ p-1- \\frac{N-1}{R_\\Omega}\\ \\delta - \\frac{\\delta^p\\ e^{(p-1)r\/\\delta}}{(p-2) (p-1)^{p-2}}\\ \\left( \\frac{1+2A}{A} \\right)^{p-2} - \\frac{M \\delta^p e^{(p-1)r\/\\delta} (1+2A)^{p-2}}{(p-2) (p-1)^{p-2} A^{p-1}} \\right] \\\\\n& \\ge & \\frac{e^{-(p-1)r\/\\delta}}{\\delta^p}\\ \\left[ 1 - \\frac{2 \\delta^p\\ e^{p-1}}{(p-2)} \\ \\left( \\frac{3}{p-1} \\right)^{p-2} \\right] \\ge 0\\,,\n\\end{eqnarray*}\nthe last two inequalities being a consequence of the choice \\eqref{c9} and \\eqref{c10} of $\\delta$, $A$, and $M$. Therefore, $S_{\\delta,x_0}$ is a supersolution to \\eqref{c8a} in $(0,\\infty)\\times U_{\\delta,x_0}$. Moreover, since $t_0\\ge 1$, we have\n$$\nS_{\\delta,x_0}(0,x) \\ge M - \\frac{M}{2^{1\/(p-2)}} = e^{\\|u_0\\|_\\infty\/(p-1)} \\ge h(0,x)\\,, \\quad x\\in \\overline{U_{\\delta,x_0}}\\,,\n$$\nby \\eqref{c9}. It next follows from \\eqref{c3} and \\eqref{c7} that\n\\begin{equation}\\label{c11}\nh(t,x) \\le \\frac{e^{u(t,x)\/(p-1)}}{p-1}\\ u(t,x)\\le \\frac{C_1 e^{C_1\/(p-1)}}{p-1}\\ (1+t)^{-1\/(p-2)}\\,, \\quad (t,x)\\in [0,\\infty)\\times\\bar{\\Omega}\\,.\n\\end{equation}\nThen, if $(t,x)\\in (0,t_0)\\times \\partial U_{\\delta,x_0}$, either $x\\in\\partial\\Omega$ and $h(t,x)=0\\le S_{\\delta,x_0}(t,x)$. 
Or $r=|x-y_0|-R_\\Omega=\\delta$ and it follows from \\eqref{c9} and \\eqref{c11} that\n$$\nS_{\\delta,x_0}(t,x) \\ge \\frac{A (1-e^{-1})}{(1+t)^{1\/(p-2)}} \\ge \\frac{C_1 e^{C_1\/(p-1)}}{(p-1 )(1+t)^{1\/(p-2)}} \\ge h(t,x)\\,.\n$$\nWe then deduce from the comparison principle \\cite[Theorem~8.2]{CIL92} that $h(t,x)\\le S_{\\delta,x_0}(t,x)$ for $t\\in [0,t_0]$ and $x\\in\\overline{U_{\\delta,x_0}}$. In particular, owing to \\eqref{c7}, for $t=t_0$,\n$$\n\\frac{u(t_0,x)}{p-1} \\le h(t_0,x) \\le \\frac{A}{(1+t_0)^{1\/(p-2)}}\\ \\left( 1 - e^{-(|x-y_0|-R_\\Omega)\/\\delta} \\right)\\,, \\quad x\\in \\overline{U_{\\delta,x_0}}\\,,\n$$\nand we argue as in the proof of Lemma~\\ref{lec3} to complete the proof.\n\\end{proof}\n\nWe next proceed as in \\cite[Theorem~5]{KK00} to deduce the Lipschitz continuity of $u(t)$ from Lemma~\\ref{lec3} and Lemma~\\ref{lec4}.\n\n\\begin{corollary}\\label{coc5}\nAssume that $q\\in [p-1,p]$. Then there is $L_2>0$ such that\n$$\n|u(t,x)-u(t,y)| \\le \\frac{L_2}{(1+t)^{1\/(p-2)}}\\ |x-y|\\,, \\quad (t,x,y)\\in [1,\\infty) \\times \\bar{\\Omega} \\times \\bar{\\Omega}\\,.\n$$\n\\end{corollary}\n\n\\subsection{Convergence}\\label{sec:cv}\n\nLet $U$ be the solution to the $p$-Laplacian equation with homogeneous Dirichlet boundary conditions\n\\begin{eqnarray}\n\\partial_t U - \\Delta_p U & = & 0\\,, \\qquad (t,x)\\in Q\\,, \\label{x0a}\\\\\nU & = & 0\\,, \\qquad (t,x)\\in (0,\\infty)\\times\\partial\\Omega\\,, \\label{x0b} \\\\\nU(0) & = & u_0\\,, \\qquad x\\in\\Omega\\,. 
\\label{x0c}\n\\end{eqnarray}\nOwing to the nonnegativity of $|\\nabla u|^q$, the comparison principle \\cite[Theorem~8.2]{CIL92} ensures that\n\\begin{equation}\\label{a4a}\n0 \\le U(t,x) \\le u(t,x)\\,, \\qquad (t,x)\\in [0,\\infty)\\times\\bar{\\Omega}\\,.\n\\end{equation}\n\nWe introduce the scaling variable $s=\\ln{t}$ for $t>0$ and the new unknown functions $v$ and $V$ defined by\n\\begin{eqnarray}\nu(t,x) & = & t^{-1\/(p-2)}\\ v(\\ln{t},x)\\,, \\qquad (t,x)\\in (0,\\infty)\\times\\bar{\\Omega}\\,, \\label{x1} \\\\\nU(t,x) & = & t^{-1\/(p-2)}\\ V(\\ln{t},x)\\,, \\qquad (t,x)\\in (0,\\infty)\\times\\bar{\\Omega}\\,. \\label{x5}\n\\end{eqnarray}\nThen $v$ is a viscosity solution to\n\\begin{eqnarray}\n\\partial_s v - \\Delta_p v - e^{-(q-p+1)s\/(p-2)}\\ |\\nabla v|^q - \\frac{v}{p-2}& = & 0\\,, \\qquad (s,x)\\in Q\\,, \\label{x2}\\\\\nv & = & 0\\,, \\qquad (s,x)\\in (0,\\infty)\\times\\partial\\Omega\\,, \\label{x3} \\\\\nv(0) & = & u(1)\\,, \\qquad x\\in\\Omega\\,. \\label{x4}\n\\end{eqnarray}\nIn addition, owing to \\eqref{c1} (if $q=p-1$), \\eqref{c3} (if $q>p-1$), Corollary~\\ref{coc5}, and \\eqref{a4a}, we have\n\\begin{eqnarray}\nV(s,x) \\le v(s,x) & \\le & C_1\\,, \\qquad (s,x)\\in [0,\\infty)\\times\\bar{\\Omega}\\,, \\label{x6}\\\\\n|v(s,x)-v(s,y)| & \\le & L_2\\ |x-y|\\,, \\qquad (s,x,y)\\in [0,\\infty)\\times \\bar{\\Omega}\\times \\bar{\\Omega}\\,.\\label{x7}\n\\end{eqnarray}\n\nWe next define for $\\varepsilon\\in (0,1)$\n$$\nw_\\varepsilon(s,x) := v\\left( \\frac{s}{\\varepsilon} , x \\right) \\,, \\qquad (s,x)\\in [0,\\infty)\\times\\bar{\\Omega}\\,,\n$$\nand the half-relaxed limits\n$$\nw_*(x) := \\liminf_{(\\sigma,y,\\varepsilon)\\to (s,x,0)} w_\\varepsilon(\\sigma,y)\\,, \\qquad w^*(x) := \\limsup_{(\\sigma,y,\\varepsilon)\\to (s,x,0)} w_\\varepsilon(\\sigma,y)\\,,\n$$\nfor $(s,x)\\in (0,\\infty)\\times\\bar{\\Omega}$. Observe that $w_*$ and $w^*$ are well-defined according to \\eqref{x6} and indeed do not depend on $s>0$. 
In addition, it readily follows from \\eqref{x3} and \\eqref{x7} that\n\\begin{equation}\\label{x8}\nw_*(x)=w^*(x)=0\\,, \\quad x\\in\\partial\\Omega\\,.\n\\end{equation}\nAlso, $w_\\varepsilon$ is a solution to\n\\begin{eqnarray}\n& & \\varepsilon\\ \\partial_s w_\\varepsilon - \\Delta_p w_\\varepsilon - e^{-((q-p+1)s)\/((p-2)\\varepsilon)}\\ |\\nabla w_\\varepsilon|^q - \\frac{w_\\varepsilon}{p-2} = 0 \\;\\;\\mbox{ in }\\;\\; Q\\,,\\label{x9a} \\\\\n& & w_\\varepsilon = 0 \\;\\;\\mbox{ on }\\;\\; (0,\\infty)\\times\\partial\\Omega\\,, \\label{x9b} \\\\\n& & w_\\varepsilon(0) = u(1) \\;\\;\\mbox{ in }\\;\\; \\Omega\\,. \\label{x9c}\n\\end{eqnarray}\n\n\\medskip\n\nAt this point, we distinguish the two cases $q=p-1$ and $q\\in (p-1,p]$:\n\n\\medskip\n\n\\noindent\\textbf{Case~1: $q=p-1$.} We use the stability of semicontinuous viscosity solutions \\cite[Lemma~6.1]{CIL92} to deduce from \\eqref{x9a} that\n\\begin{eqnarray}\n& & w_* \\;\\mbox{ is a supersolution to \\eqref{b1} in }\\; \\Omega\\,, \\label{x10} \\\\\n& & w^* \\;\\mbox{ is a subsolution to \\eqref{b1} in }\\; \\Omega\\,. \\label{x11}\n\\end{eqnarray}\nIn addition, as $V(s)\\rightarrow f_0$ in $L^\\infty(\\Omega)$ as\n$s\\to\\infty$ by \\cite[Theorem~1.3]{MV94}, it also follows from\n\\eqref{x6} and the definition of $w_*$ and $w^*$ that\n\\begin{equation}\\label{x12}\nf_0(x) \\le w_*(x) \\le w^*(x) \\le C_1\\,, \\qquad x\\in\\bar{\\Omega}\\,.\n\\end{equation}\nSince $f_0>0$ in $\\Omega$ by \\cite[Theorem~1.1]{MV94}, we deduce from \\eqref{x12} that $w_*$ and $w^*$ are positive and bounded in $\\Omega$ and vanish on $\\partial\\Omega$ by \\eqref{x8}. Owing to \\eqref{x10} and \\eqref{x11}, we are then in a position to apply Lemma~\\ref{leb1} to conclude that $w^*\\le w_*$ in $\\bar{\\Omega}$. Recalling \\eqref{x12}, we have thus shown that $w_*=w^*$ in $\\bar{\\Omega}$. 
Setting $f:=w_*=w^*$, we infer from \\eqref{x8}, \\eqref{x10}, \\eqref{x11}, and \\eqref{x12} that $f\\in C_0(\\bar{\\Omega})$ is a positive viscosity solution to \\eqref{b1} so that it solves \\eqref{a6}. We have thus proved the existence of a positive solution to \\eqref{a6}, its uniqueness being granted by Corollary~\\ref{cob2}. A further consequence of the equality $w_*=w^*$ is that $\\|w_\\varepsilon(1)-f\\|_\\infty\\rightarrow 0$ as $\\varepsilon\\to 0$ (see, e.g., \\cite[Lemme~4.1]{Bl94} or \\cite[Lemma~5.1.9]{BdCD97}). In other words,\n\\begin{equation}\\label{x13}\n\\lim_{s\\to\\infty} \\|v(s)-f\\|_\\infty=0\\,,\n\\end{equation}\nwhich implies \\eqref{a5} once written in terms of $u$.\n\nFinally, a straightforward consequence of \\eqref{x7} and \\eqref{x13} is that $|f(x)-f(y)|\\le L_2\\ |x-y|$ for $(x,y)\\in \\bar{\\Omega}\\times\\bar{\\Omega}$. Consequently, $f$ is Lipschitz continuous in $\\bar{\\Omega}$ which completes the proof of Theorem~\\ref{tha1}.\n\n\\medskip\n\n\\noindent\\textbf{Case~2: $q\\in (p-1,p]$.} We use once more the stability of semicontinuous viscosity solutions \\cite[Lemma~6.1]{CIL92} to deduce from \\eqref{x9a} that\n\\begin{eqnarray}\n& & w_* \\;\\mbox{ is a supersolution to \\eqref{b10} in }\\; \\Omega\\,, \\label{x14} \\\\\n& & w^* \\;\\mbox{ is a subsolution to \\eqref{b10} in }\\; \\Omega\\,. \\label{x15}\n\\end{eqnarray}\nIn addition, as $V(s)\\rightarrow f_0$ in $L^\\infty(\\Omega)$ as $s\\to\\infty$ by \\cite[Theorem~1.3]{MV94}, it also follows from \\eqref{x6} and the definition of $w_*$ and $w^*$ that\n\\begin{equation}\\label{x16}\nf_0(x) \\le w_*(x) \\le w^*(x) \\le C_1\\,, \\qquad x\\in\\bar{\\Omega}\\,.\n\\end{equation}\nSince $f_0>0$ in $\\Omega$ by \\cite[Theorem~1.1]{MV94} and a solution to \\eqref{b10}, we apply Lemma~\\ref{leb3} to conclude that $w^*\\le f_0$ in $\\bar{\\Omega}$. Recalling \\eqref{x16}, we have proved that $w_*=w^*=f_0$ in $\\bar{\\Omega}$. 
We then complete the proof of Theorem~\\ref{tha2} for $q\\in (p-1,p]$ in the same way as that of Theorem~\\ref{tha1}.\n\n\\subsection{Improved upper bounds}\\label{sec:iub}\n\nInterestingly, the positive solution $f$ to \\eqref{a6} can also be used to construct supersolutions to \\eqref{a0a}-\\eqref{a0b} for $q>p-1$. We first consider the case $q\\in (p-1,p]$ and postpone the case $q>p$ to Section~\\ref{sec:wp2} where it is a crucial argument for the global existence of solutions.\n\n\\begin{proposition}\\label{pre1}\nAssume that $q\\in (p-1,p]$ and that there are $\\beta\\in (0,1]$ and $B>0$ such that\n\\begin{equation}\\label{e1}\nu_0(x) \\le B\\ f(x)^\\beta\\,, \\qquad x\\in\\bar{\\Omega}\\,.\n\\end{equation}\nThen there is $\\gamma\\in (0,\\beta]$ such that\n\\begin{equation}\\label{e2}\nu(t,x) \\le \\frac{\\|f\\|_\\infty^{1-\\gamma}}{\\gamma \\left( \\|f\\|_\\infty^{p-2} + \\gamma t \\right)^{1\/(p-2)}}\\ f(x)^\\gamma \\le \\frac{f(x)^\\gamma}{\\gamma \\|f\\|_\\infty^\\gamma}\\,, \\qquad (t,x)\\in [0,\\infty)\\times \\bar{\\Omega}\\,.\n\\end{equation}\n\\end{proposition}\n\n\\begin{proof}\nWe fix $\\gamma\\in (0,1)$ such that\n\\begin{equation}\\label{e3}\n\\gamma \\le \\min{\\left\\{ \\frac{p-2}{p-1} , \\beta , \\frac{1}{B \\|f\\|_\\infty^\\beta} \\right\\}}\\,,\n\\end{equation}\nand, for $(t,x)\\in [0,\\infty)\\times\\bar{\\Omega}$, we define\n$$\n\\Sigma(t,x) = \\frac{A f(x)^\\gamma}{\\gamma (1+\\delta t)^{1\/(p-2)}} \\;\\;\\mbox{ with }\\;\\; A := \\frac{1}{\\|f\\|_\\infty^\\gamma}\\;\\;\\mbox{ and }\\;\\; \\delta = \\frac{\\gamma}{\\|f\\|_\\infty^{p-2}}\\,.\n$$\nWe claim that\n\\begin{equation}\\label{e4}\n\\Sigma \\;\\mbox{ is a supersolution to \\eqref{a0a} in }\\; Q \\;\\mbox{ for }\\; q\\in [p-1,p]\\,.\n\\end{equation}\nIndeed, let $\\phi\\in C^2(Q)$ and consider $(t_0,x_0)\\in Q$ where $\\Sigma-\\phi$ has a local minimum. 
Since $\\Sigma$ is smooth with respect to the time variable, this property implies that\n\\begin{equation}\\label{e5}\n\\partial_t \\phi(t_0,x_0) = - \\frac{\\delta A}{\\gamma (p-2)}\\ \\frac{f(x_0)^\\gamma}{\\left( 1+\\delta t_0 \\right)^{(p-1)\/(p-2)}}\\,,\n\\end{equation}\nand that $x\\mapsto \\Sigma(t_0,x)-\\phi(t_0,x)$ has a local minimum at $x_0$. In other words, the function\n$x\\mapsto f(x)^\\gamma - \\gamma\\left( 1+\\delta t_0 \\right)^{1\/(p-2)}\\ \\phi(t_0,x)\/A$ has a local minimum at $x_0$ and we infer from \\eqref{a6}, the positivity of $f$ in $\\Omega$, and \\cite[Corollaire~2.1]{Bl94} (or \\cite[Proposition~2.5]{BdCD97}) that $g:=f^\\gamma$ is a viscosity solution to\n$$\n-\\Delta_p g - \\frac{(1-\\gamma)(p-1)}{\\gamma}\\ \\frac{|\\nabla g|^p}{g} - |\\nabla g|^{p-1} - \\frac{\\gamma^{p-1}}{p-2}\\ g^{(1-(1-\\gamma)(p-1))\/\\gamma} = 0 \\;\\;\\mbox{ in }\\;\\;\\Omega\\,.\n$$\nConsequently,\n\\begin{eqnarray*}\n& & - \\frac{\\gamma^{p-1}}{A^{p-1}}\\ \\left( 1+\\delta t_0 \\right)^{(p-1)\/(p-2)}\\ \\Delta_p\\phi(t_0,x_0) - \\frac{(1-\\gamma)(p-1)\\gamma^{p-1}}{A^p}\\ (1+\\delta t_0)^{p\/(p-2)} \\frac{|\\nabla\\phi(t_0,x_0)|^p}{f(x_0)^\\gamma} \\\\\n& & - \\frac{\\gamma^{p-1}}{A^{p-1}}\\ \\left( 1+\\delta t_0 \\right)^{(p-1)\/(p-2)}\\ |\\nabla\\phi(t_0,x_0)|^{p-1} - \\frac{\\gamma^{p-1}}{p-2}\\ f(x_0)^{1-(1-\\gamma)(p-1)} \\ge 0\\,,\n\\end{eqnarray*}\nfrom which we deduce, since $\\gamma\\in (0,1)$,\n\\begin{eqnarray}\n- \\Delta_p\\phi(t_0,x_0) & \\ge & \\frac{(1-\\gamma)(p-1)}{A}\\ (1+\\delta t_0)^{1\/(p-2)} \\frac{|\\nabla\\phi(t_0,x_0)|^p}{f(x_0)^\\gamma} \\label{e6} \\\\\n& + & |\\nabla\\phi(t_0,x_0)|^{p-1} + \\frac{A^{p-1}}{p-2}\\ \\frac{f(x_0)^{1-(1-\\gamma)(p-1)}}{\\left( 1+\\delta t_0 \\right)^{(p-1)\/(p-2)}} \\,. 
\\nonumber\n\\end{eqnarray}\nBy \\eqref{e5} and \\eqref{e6}, we have\n\\begin{equation}\\label{e7}\n\\partial_t\\phi(t_0,x_0) - \\Delta_p\\phi(t_0,x_0) - |\\nabla\\phi(t_0,x_0)|^q \\ge \\frac{|\\nabla\\phi(t_0,x_0)|^{p-1}}{f(x_0)^\\gamma}\\ R_1 + \\frac{A^{p-1} f(x_0)^{1-(1-\\gamma)(p-1)}}{\\left( 1+\\delta t_0 \\right)^{(p-1)\/(p-2)}}\\ \\frac{R_2}{p-2}\\,,\n\\end{equation}\nwith\n\\begin{eqnarray*}\nR_1 & := & \\frac{(1-\\gamma)(p-1)}{A}\\ (1+\\delta t_0)^{1\/(p-2)} |\\nabla\\phi(t_0,x_0)| + f(x_0)^\\gamma - f(x_0)^\\gamma\\ |\\nabla\\phi(t_0,x_0)|^{q-p+1}\\,, \\\\\nR_2 & := & 1 - \\frac{\\delta}{\\gamma A^{p-2}}\\ f(x_0)^{(1-\\gamma)(p-2)}\\,.\n\\end{eqnarray*}\nOn the one hand, \\eqref{e3} guarantees that $(1-\\gamma)(p-1)\\ge 1$ which, together with Young's inequality and the assumption $q\\in (p-1,p]$, leads us to\n$$\nR_1 \\ge \\|f\\|_\\infty^\\gamma\\ |\\nabla\\phi(t_0,x_0)| + f(x_0)^\\gamma - (q-p+1)\\ f(x_0)^\\gamma\\ |\\nabla\\phi(t_0,x_0)| - (p-q)\\ f(x_0)^\\gamma \\ge 0\\,.\n$$\nOn the other hand, the choice of $A$ and $\\delta$ gives\n$$\nR_2 = 1 - \\left( \\frac{f(x_0)}{\\|f\\|_\\infty} \\right)^{(1-\\gamma)(p-2)} \\ge 0\\,.\n$$\nCombining the previous two inequalities with \\eqref{e7} completes the proof of the claim \\eqref{e4}.\n\nNow, $u=\\Sigma=0$ on $(0,\\infty)\\times\\partial\\Omega$ while, since $\\beta\\ge\\gamma$, we infer from \\eqref{e3} and the choice of $A$ that, for $x\\in\\bar{\\Omega}$,\n$$\nu_0(x) \\le B\\ f(x)^\\beta = \\frac{A f(x)^\\gamma}{\\gamma}\\ \\frac{\\gamma B f(x)^{\\beta-\\gamma}}{A} \\le \\Sigma(0,x)\\ \\frac{\\gamma B \\|f\\|_\\infty^{\\beta-\\gamma}}{A} \\le \\Sigma(0,x)\\,.\n$$\nWe then deduce from the comparison principle \\cite[Theorem~8.2]{CIL92} that $u(t,x)\\le\\Sigma(t,x)$ for $(t,x)\\in [0,\\infty)\\times\\bar{\\Omega}$ and the proof of Proposition~\\ref{pre1} is complete.\n\\end{proof}\n\n\\mysection{Well-posedness and blowup: $q>p$}\\label{sec:wpbu}\n\n\\subsection{Well-posedness}\\label{sec:wp2}\n\nWe 
finally turn to the case $q>p$ and first show that a suitable multiple of the positive solution $f$ to \\eqref{a6} allows us to construct a supersolution to \\eqref{a0a} when $q>p$ which vanishes identically on the boundary of $\\Omega$.\n\n\\begin{lemma}\\label{lef1}\nAssume that $q>p-1$. Recalling that $f\\in C_0(\\bar{\\Omega})$ is the unique positive solution to \\eqref{a6}, the function\n$$\n\\mathcal{F}(t,x) := \\frac{f(x)}{\\left( \\|\\nabla f\\|_\\infty^{p-2} + t \\right)^{1\/(p-2)}}\\,, \\qquad (t,x)\\in [0,\\infty)\\times\\bar{\\Omega}\\,,\n$$\nis a supersolution to \\eqref{a0a} in $Q$.\n\\end{lemma}\n\n\\begin{proof}\nLet $\\phi\\in C^2(Q)$ and consider $(t_0,x_0)\\in Q$ where $\\mathcal{F}-\\phi$ has a local minimum. Since $\\mathcal{F}$ is smooth with respect to the time variable and Lipschitz continuous with respect to the space variable, this property implies that\n\\begin{eqnarray}\n\\partial_t \\phi(t_0,x_0) & = & - \\frac{1}{p-2}\\ \\frac{f(x_0)}{\\left( \\|\\nabla f\\|_\\infty^{p-2} + t_0 \\right)^{(p-1)\/(p-2)}}\\,, \\label{f1}\\\\\n|\\nabla\\phi(t_0,x_0)| & \\le & \\frac{\\|\\nabla f\\|_\\infty}{\\left( \\|\\nabla f\\|_\\infty^{p-2} + t_0 \\right)^{1\/(p-2)}} \\le 1\\,, \\label{f2}\n\\end{eqnarray}\nand that $x\\mapsto \\mathcal{F}(t_0,x)-\\phi(t_0,x)$ has a local minimum at $x_0$. 
In other words, the function\n$x\\mapsto f(x) - \\left( \\|\\nabla f\\|_\\infty^{p-2} + t_0 \\right)^{1\/(p-2)}\\ \\phi(t_0,x)$ has a local minimum at $x_0$ and we infer from \\eqref{a6} that\n$$\n- \\left( \\|\\nabla f\\|_\\infty^{p-2} + t_0 \\right)^{(p-1)\/(p-2)}\\ \\Delta_p\\phi(t_0,x_0) - \\left( \\|\\nabla f\\|_\\infty^{p-2} + t_0 \\right)^{(p-1)\/(p-2)}\\ |\\nabla\\phi(t_0,x_0)|^{p-1} - \\frac{f(x_0)}{p-2} \\ge 0\\,,\n$$\nwhich, together with \\eqref{f1}, gives\n\\begin{equation}\\label{f3}\n\\partial_t \\phi(t_0,x_0) - \\Delta_p \\phi(t_0,x_0) - |\\nabla\\phi (t_0,x_0)|^{p-1} \\ge 0\\,.\n\\end{equation}\nWe then infer from \\eqref{f2}, \\eqref{f3}, and the property $q>p-1$ that\n$$\n\\partial_t \\phi(t_0,x_0) - \\Delta_p \\phi(t_0,x_0) - |\\nabla\\phi (t_0,x_0)|^q \\ge |\\nabla\\phi (t_0,x_0)|^{p-1} \\left( 1 - |\\nabla\\phi (t_0,x_0)|^{q-p+1} \\right) \\ge 0\\,,\n$$\nwhich completes the proof of Lemma~\\ref{lef1}.\n\\end{proof}\n\n\\begin{proposition}\\label{prf2}\nAssume that $q>p$ and\n\\begin{equation}\\label{f4}\nu_0(x) \\le \\frac{f(x)}{\\|\\nabla f\\|_\\infty}\\,, \\qquad x\\in\\bar{\\Omega}\\,.\n\\end{equation}\nThen there is a unique solution $u$ to \\eqref{a0a}-\\eqref{a0c} in the sense of Definition~\\ref{defa0} and it satisfies\n\\begin{equation}\\label{e2a}\nu(t,x) \\le \\frac{f(x)}{\\left( \\|\\nabla f\\|_\\infty^{p-2} + t \\right)^{1\/(p-2)}} \\le \\frac{f(x)}{\\|\\nabla f\\|_\\infty}\\,, \\qquad (t,x)\\in [0,\\infty)\\times \\bar{\\Omega}\\,.\n\\end{equation}\n\\end{proposition}\n\n\\begin{proof}\nOn the one hand, the solution $U$ to the $p$-Laplacian equation \\eqref{x0a}-\\eqref{x0c} is clearly a subsolution to \\eqref{a0a} in $Q$. On the other hand, the function $\\mathcal{F}$ defined in Lemma~\\ref{lef1} is a supersolution to \\eqref{a0a} in $Q$ by Lemma~\\ref{lef1} and is thus also a supersolution to \\eqref{x0a}. 
Since $U=\\mathcal{F}=0$ on $(0,\\infty)\\times\\partial\\Omega$ and $U(0,x)=u_0(x)\\le \\mathcal{F}(0,x)$ for $x\\in\\bar{\\Omega}$ by \\eqref{f4}, the comparison principle \\cite[Theorem~8.2]{CIL92} applied to the $p$-Laplacian equation \\eqref{x0a} ensures that $U\\le \\mathcal{F}$ in $[0,\\infty)\\times\\bar{\\Omega}$. This property and the simultaneous vanishing of $U$ and $\\mathcal{F}$ on $(0,\\infty)\\times\\partial\\Omega$ allow us to use the classical Perron method to establish the existence of a solution $u$ to \\eqref{a0a}-\\eqref{a0c} in the sense of Definition~\\ref{defa0} which satisfies \\eqref{e2a}. The uniqueness next follows from the comparison principle \\cite[Theorem~8.2]{CIL92}.\n\\end{proof}\n\n\\subsection{Large time behaviour}\\label{sec:ltb2}\n\nWe first recall that Lemma~\\ref{lec2} is also valid in that case. It next readily follows from the Lipschitz continuity of $f$ and \\eqref{e2a} that\n$$\n0 \\le u(t,x) = u(t,x) - u(t,x_0) \\le \\frac{\\|\\nabla f\\|_\\infty}{(\\|\\nabla f\\|_\\infty^{p-2} + t)^{1\/(p-2)}}\\ |x-x_0|\\,, \\quad (t,x,x_0)\\in [0,\\infty)\\times \\bar{\\Omega}\\times \\partial\\Omega\\,,\n$$\nand we proceed as in \\cite[Theorem~5]{KK00} to show that Corollary~\\ref{coc5} remains true (with a different constant $L_2$). The convergence proof is then the same as that performed in Section~\\ref{sec:cv} for $q\\in (p-1,p]$.\n\n\\subsection{Blowup}\\label{sec:bu}\n\nLet us first recall that, by a weak solution to \\eqref{a0a}-\\eqref{a0c}, we mean a nonnegative function $u\\in C([0,\\infty)\\times\\bar{\\Omega})$ which belongs to $L^\\infty(0,T; W^{1,\\infty}(\\Omega))$ and satisfies\n\\begin{equation}\\label{z1}\n\\frac{d}{dt} \\int_\\Omega u(t,x)\\ \\psi(x)\\ dx = \\int_\\Omega \\left( - |\\nabla u(t,x)|^{p-2}\\ \\nabla u(t,x) \\cdot \\nabla\\psi(x) + |\\nabla u(t,x)|^q\\ \\psi(x) \\right)\\ dx\n\\end{equation}\nfor any $\\psi\\in H^1_0(\\Omega)$ and $T>0$. 
We now show that such a solution cannot exist for all times if $q>p$ and $u_0$ is sufficiently large.\n\n\\medskip\n\n\\begin{proposition}\\label{prz1}\nAssume that $q>p$ and define $r:=q\/(q-p)$. There is a positive real number $\\kappa$ depending on $\\Omega$, $p$, and $q$ such that, if $\\|u_0\\|_{r+1}>\\kappa$, then \\eqref{a0a}-\\eqref{a0c} has no global weak solution.\n\\end{proposition}\n\n\\begin{proof}\nWe argue as in \\cite[Theorem~2.4]{HM04} and use classical approximation arguments to deduce from \\eqref{z1} and H\\\"older's and Young's inequalities that\n\\begin{eqnarray*}\n\\frac{1}{r+1}\\ \\frac{d}{dt}\\|u\\|_{r+1}^{r+1} & = & \\int_\\Omega u^r\\ |\\nabla u|^q\\ dx - \\frac{q}{q-p}\\ \\int_\\Omega u^{r-1}\\ |\\nabla u|^p\\ dx \\\\\n& \\ge & \\int_\\Omega u^r\\ |\\nabla u|^q\\ dx - \\frac{q}{q-p}\\ |\\Omega|^{(q-p)\/q}\\ \\left( \\int_\\Omega u^r\\ |\\nabla u|^q\\ dx \\right)^{p\/q} \\\\\n& \\ge & \\int_\\Omega u^r\\ |\\nabla u|^q\\ dx - \\frac{p}{q}\\ \\int_\\Omega u^r\\ |\\nabla u|^q\\ dx - \\left( \\frac{q}{q-p} \\right)^{p\/(q-p)}\\ |\\Omega| \\\\\n& \\ge & \\frac{q-p}{q}\\ \\int_\\Omega u^r\\ |\\nabla u|^q\\ dx - \\left( \\frac{q}{q-p} \\right)^{p\/(q-p)}\\ |\\Omega| \\\\\n& \\ge & \\frac{q-p}{q}\\ \\left( \\frac{q-p}{q-p+1} \\right)^q\\\n\\int_\\Omega \\left| \\nabla\\left( u^{(q-p+1)\/(q-p)} \\right)\n\\right|^q\\ dx - \\left( \\frac{q}{q-p} \\right)^{p\/(q-p)}\\ |\\Omega|\n\\end{eqnarray*}\nWe now use the Poincar\\'e inequality to obtain that\n$$\n\\frac{1}{r+1}\\ \\frac{d}{dt}\\|u\\|_{r+1}^{r+1} \\ge \\kappa_1\\ \\int_\\Omega u^{r+q}\\ dx -\\kappa_2\n$$\nfor some constants $\\kappa_1>0$ and $\\kappa_2>0$ depending only on $\\Omega$, $p$, and $q$. 
Since $q>1$, we use again H\\\"older's inequality to deduce\n$$\n\\frac{1}{r+1}\\ \\frac{d}{dt}\\|u\\|_{r+1}^{r+1} \\ge\n\\frac{\\kappa_1}{|\\Omega|^{(q-1)\/(r+q)}}\\ \\|u\\|_{r+1}^{r+q} -\n\\kappa_2\\,.\n$$\nSetting $y(t):=\\|u(t)\\|_{r+1}^{r+1}$, this differential inequality reads $y' \\ge c\\ y^{(r+q)\/(r+1)} - (r+1)\\ \\kappa_2$ for some $c>0$, with an exponent $(r+q)\/(r+1)>1$ since $q>1$: the function $y$ then blows up in finite time, which clearly contradicts the global existence as soon as $\\|u_0\\|_{r+1}$ is sufficiently large.\n\\end{proof}\n\n\\subsection*{Acknowledgements}\nThe authors would like to thank Matteo Bonforte and Michael Winkler for helpful discussions and comments. This work was done during a visit of Ph.~Lauren\\c{c}ot to the Fakult\\\"{a}t f\\\"{u}r Mathematik of the Universit\\\"{a}t Duisburg-Essen and while C.~Stinner held a one month invited position at the Institut de Math\\'{e}matiques de Toulouse, Universit\\'{e} Paul Sabatier - Toulouse III. We would like to express our gratitude for the invitation, support, and hospitality.\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\n\n\n\n\n\\section{Introduction}\n\n\nThe use and importance of Artificial Intelligence (AI) systems and, in particular, Machine Learning (ML) models, have increased in many industrial sectors (i.e. finance, healthcare, hiring, transportation, etc.), with the purpose, among others, of automating decision-making processes. The majority of these systems have a direct or indirect impact on people's lives. Besides the benefits, many ethical concerns have recently emerged with the widespread use of such systems: bias amplification, data privacy, lack of transparency, human oversight, accountability, etc.~\\citep{pekka2018european}. This is putting increasing pressure on developers and service providers to supply explanations of the models and in particular of their outcomes. 
The European Commission has recently published a proposal for what is going to be the first attempt ever to insert AI systems and their use in a coherent legal framework~\\citep{EUproposal}: the proposal devotes significant attention to the importance of \\emph{transparency} of AI systems.\n\n\nTo enhance transparency and trust in AI systems, Explainable Artificial Intelligence (XAI) has become increasingly important~\\citep{adadi2018peeking}. In a nutshell, XAI aims to provide information to explain and justify automated results and thus to give the tools to understand the AI system behaviour. The ultimate goals of XAI are many, among which the prevention of possible harms to end users and also the possibility of gaining useful insights to improve the system itself. Therefore, XAI involves the ability to explain both the technical aspects, pertaining to modeling, and the related human decisions. This entails considering different target audiences of the proposed explanations, such as data scientists, developers, executives, regulatory entities and end users affected by decisions, among others~\\citep{arrieta2020explainable}. Against this background, a growing number of methods and approaches have appeared, depending on the type of problem faced and the stakeholder considered (see, e.g., \\cite{adadi2018peeking, arrieta2020explainable}).\n\nExplainability in Machine Learning (ML) is usually addressed either by employing simple and thus intrinsically understandable models, such as linear or logistic regressions, rule-based methods, decision trees, etc., or by using appropriate tools to generate explanations on trained models.\nThese tools are usually categorized as local or global methods, and as model-specific or model-agnostic solutions~\\citep{molnar2019, guidotti2018survey}.\nWhile local methods aim to provide explanations for a single instance and its given outcome, global methods aim to explain the overall behavior of the model. 
An example of a global method is that of approximating a black-box model with an intrinsically explainable one.\nModel-specific approaches are tools that can be applied only to some classes of models (e.g. looking at weights of a linear regression), while model-agnostic methods can be used to explain any black-box model.\nLIME~\\citep{lime} and SHAP~\\citep{shap} are two of the most popular model-agnostic local explainers, and they are part of the broader family of methods based on feature contribution, together with other well-known approaches such as Partial Dependence Plots~\\citep{pdp}, Accumulated Local Effects~\\citep{ale}, Individual Conditional Expectation~\\citep{ice}.\nOther methods are based on local rule extraction, such as Anchors~\\citep{anchors} and LORE~\\citep{lore}.\nThese model-agnostic approaches provide explanations by trying to locally approximate the black-box model. On the other hand, example-based methods explain local instances by computing other points in the feature space --- the \\emph{examples} --- with some desired characteristics, such as representing the typical point belonging to some class (prototypes and criticism~\\citep{kim2016examples, gurumoorthy2019efficient}), or being similar to the original point but with enough changes to be given a different outcome (adversarial examples --- see \\cite{molnar2019} --- and counterfactual explanations~\\citep{wachter2017counterfactual}). \n\n\n\n\n\nCounterfactual explanations, first proposed by~\\cite{wachter2017counterfactual}, are becoming one of the most popular approaches to explainability in ML within technical, legal and business circles~\\citep{barocas2020hidden}. They are local example-based and mostly model-agnostic explanations that construct a set of statements to communicate to end users what could be changed from an original input to receive an alternative decision. 
Unlike other explainability techniques, this approach imposes no constraints on model complexity, avoids the disclosure of technical details (protecting trade secrets), provides precise suggestions for achieving a desired outcome and appears to produce explanations that comply with requirements of noteworthy governmental initiatives~\\citep{barocas2020hidden}. \n\nThe volume of research on counterfactual explanations is growing and different solutions have been proposed (refer to \\cite{verma2020counterfactual, stepin2021survey} for surveys on counterfactual explanation methods). While these efforts are significant, they generally fall short of generating feasible actions that end users should carry out~\\citep{barocas2020hidden}, which is the focus of this work. Obviously, the concept of causality is a key element if we want to find counterfactual explanations that guide end users to act and not only understand the output of a model~\\citep{chari2020directions}. Causal methods can effectively represent cause-effect relationships among variables, thus going in the direction of disentangling the causal effects on the entire system due to direct changes in some of the variables. \n\n\nIn this paper we present CEILS: Counterfactual Explanations as Interventions in Latent Space, a method to generate counterfactual explanations capturing by design the underlying causal relations from the data. \nThis methodology is based on the idea of employing existing counterfactual explanation generators on a latent space that represents the residual degrees of freedom once the causal structure of the problem is taken into account.\nWe demonstrate the effectiveness of our approach with a set of different experiments using synthetic and real datasets (including a proprietary dataset of the financial domain). We evaluate the explanations using a large set of metrics, trying to quantify and pinpoint the key aspects of our proposal. 
This evaluation is a useful precursor to user studies, where interactions with users and their feedback are employed to guide the search for the best explanations. \n\n\nThe paper is organised as follows. Section~\\ref{sec:background} covers the existing background and related work on counterfactual explanations. Section~\\ref{sec:solution} details our proposed methodology, whose main advantages with respect to prior works are highlighted in section~\\ref{sec:recourse}. Section~\\ref{sec:experiments} is devoted to experimental results evaluating our method, including a detailed definition of the metrics and a discussion of the major findings. Finally, section~\\ref{sec:conclusions} concludes the paper by summarizing the proposed method and presenting its main limitations and possible directions of improvement.\n\n\n\n\n\\section{Related work \\label{sec:background}}\n\n\n\\subsection{Related concepts}\n\nSeveral governmental initiatives towards explainable AI, such as the General Data Protection Regulation (GDPR) in the European Union~\\citep{gdpr} and the Defence Advanced Research Projects Agency (DARPA) XAI program of the United States~\\citep{darpa}, as well as the already mentioned European Commission proposal for legislation on AI systems~\\citep{EUproposal}, endeavour to promote the creation of explanations that can be understood and appropriately trusted by end users. With the goal of approaching user-centric explanations in AI, researchers can use findings from previous research in the social sciences, wherein contrastive and counterfactual explanations are claimed to be inherent to human cognition~\\citep{miller2019explanation}.\n\nIn the field of XAI, there seems to be an overlap between the concepts of \\emph{contrastive} and \\emph{counterfactual} explanations~\\citep{stepin2021survey}. 
An explanation is \\emph{contrastive} when it does not describe the reason for an event to happen (\\textit{\"Why P?\"}), but seeks the reason for an event occurring \\emph{relative to another} that did not (\\textit{\"Why P rather than Q?\"})~\\citep{miller2019explanation}. Counterfactual explanations are defined as a set of statements constructed to communicate what could be changed in the original profile to get a \\emph{different} outcome from the decision-making process~\\citep{wachter2017counterfactual}. Therefore, counterfactual explanations are normally considered contrastive by nature and provide a source of valuable complementary information \\citep{byrne2019counterfactuals}. Indeed, people usually do not ask why a certain prediction was made, but why this prediction was made \\emph{instead of another prediction}: therefore, one of the usual requirements for a ``good'' explanation is precisely to be contrastive~\\citep{lipton1990contrastive, molnar2019}. Notice that counterfactual explanations have the additional characteristic of representing a conditional clause (\\textit{\"If X were to occur, then Y would (or might) occur\"})~\\citep{stepin2021survey}, thus adding a ``causality layer'' on top of the contrastive statement. Indeed, both the work of \\cite{karimi2020algorithmic} and our proposed methodology build a bridge between ``counterfactuals'' as intended by causal inference frameworks~\\citep{pearl2016causal, spirtes2000causation} and ``counterfactual explanations'' that are usually \\emph{not} embedded in formal causal theory frameworks.\n\n \n\nCounterfactual explanations are strictly connected to, but different from, \\emph{algorithmic recourse}. While the former, as the name suggests, provides an \\emph{explanation} of a specific model outcome (by means of showing a scenario as close as possible to the original but reaching a different outcome), the latter provides \\emph{recommendations} of what actions to undertake in order to gain a different outcome. 
In layman's terms, counterfactual explanations inform an\nindividual where they need to get to, but not how to get there \\citep{karimi2021algorithmic}.\nRephrasing \\cite{karimi2021survey}, both can be cast in a counterfactual form by asking the following questions: \n\\begin{itemize}\n \\item \\emph{explanation}: what profile would have led to receiving a different outcome?\n \\item \\emph{recourse}: what actions would have led me to reach such a profile?\n\\end{itemize} \nAlgorithmic recourse refers, in fact, to the set of actions that an individual should perform in order to reach the desired outcome~\\citep{joshi2019towards, venkatasubramanian2020philosophical}. Notice that the second question somehow incorporates the first one. In other terms, algorithmic recourse is a broad concept, which contains both the counterfactual explanation and the recommendations on how to reach it. \n\nTo address the challenge of algorithmic recourse, it is important to distinguish the variables in terms of their level of ``actionability'' \\citep{karimi2020algorithmic}: there are variables that cannot change (e.g. race, sex, date of birth), variables that can change but cannot be directly controlled by the individual (e.g. credit score), and variables that can --- at least in principle --- be directly acted upon (e.g. bank balance, income, education). \n\nThe aforementioned difference between explanations and recourse may seem only a matter of terminology, and indeed in the majority of the literature on counterfactual explanations it is understood that, given a counterfactual observation, it is straightforward to find the set of actions necessary to reach it by simply taking the difference of the two feature vectors~\\citep{barocas2020hidden}. But this is true only under very stringent assumptions, which are outlined in~\\cite{karimi2020algorithmic2, karimi2021survey} and which will be made clearer in section~\\ref{sec:recourse}. 
\n\n\\subsection{Generation of explanations}\n\nSince the first proposal of counterfactual explanations by~\\cite{wachter2017counterfactual}, a large body of research concerning different algorithms and techniques to generate contrastive and counterfactual explanations has been conducted~\\citep{verma2020counterfactual, stepin2021survey}. Most generation techniques rely on establishing an optimization problem to find the \\emph{nearest counterfactual} in the space of features, with respect to the observation to be explained~\\citep{wachter2017counterfactual, mace, mohammadi2020scaling}. The metric used to define the distance to be minimized is sometimes referred to as \\emph{proximity}. \nMoreover, several additional proposals have been put forward to achieve desirable explanatory properties, such as keeping a low number of feature changes (\\emph{sparsity})~\\citep{dice} or possibly producing more than one counterfactual explanation for each observation, as \\emph{diverse} as possible from each other~\\citep{dice, mace}. \\cite{alibi} propose the use of \\emph{prototypes}~\\citep{kim2016examples, gurumoorthy2019efficient} to guide the optimization process, with a twofold goal: to find counterfactuals that are ``as close as possible'' to the distribution of the observed dataset, and to speed up the optimization search. \\cite{dhurandhar2018explanations} propose the use of autoencoders trained on the given data to provide explanations that are near the data manifold.\n\nSome works have put forward proposals to embed causality aspects into the counterfactual generation process. For example, \\cite{mahajan2019preserving} introduce a ``causal proximity'' loss term by which the counterfactual explanation is ``pushed'' towards regions in which the underlying causality among features is preserved. 
\\cite{karimi2020algorithmic} build on the optimization framework described in \\cite{mace} and introduce causality by computing counterfactuals through abduction-action-prediction steps as prescribed by~\\cite{pearl2016causal}. \\cite{karimi2020algorithmic2} focus on the case in which only limited causal knowledge is available, and thus propose probabilistic approaches (via Gaussian processes or conditional average treatment effect) that relax the assumption of having access to the full structural equations.\n\n\nThe counterfactual generation process is usually expressed as an optimization problem, either constrained or unconstrained (see section \\ref{sec:prob_setting} for details), that has been tackled with various strategies: gradient-based methods~\\citep{wachter2017counterfactual, alibi, dice}; genetic-based algorithms~\\citep{certifai, lore}; graph-based shortest path algorithms~\\citep{face}; and approaches building on formal verification tools and satisfiability modulo theories (SMT) solvers \\citep{mace}.\n\n\n\nFinally, the literature takes into account other aspects as well: \\cite{mahajan2019preserving} report the evaluation of counterfactual explanations with respect to \\textit{computational efficiency} and the amount of time necessary to generate the explanations, which is indeed one of the challenges in this field~\\citep{verma2020counterfactual}. \\cite{binns2018s} and \\cite{fernandez2020explaining} evaluate counterfactual explanations in comparison with other XAI approaches, such as feature importance. 
\\cite{miller2019explanation} reviews relevant papers from disciplines such as philosophy, cognitive science and social psychology, to draw some findings that can be applied to AI.\n\n\n\n\n\\section{Methodology \\label{sec:solution}}\n\n\n\\subsection{Problem setting}\\label{sec:prob_setting}\n\n\nConsider an ML classifier $\\mathcal{C}$ estimating the relationship between a binary target random variable $Y \\in \\{0, 1\\}$ and predictors $X = (X_1, \\ldots, X_d)$.\nTypically, one has\n\\begin{equation}\n \\hat{Y} = \\mathcal{C}(x) = \\bm{1}_{\\{R(x) > t\\}},\n\\label{eq:M}\n\\end{equation}\nwhere $x\\in\\mathcal{X}$ is a specific realization of $X$ --- namely, an observation; $R(x)$ is usually referred to as the \\emph{score} and is learned to estimate $P(Y = 1\\ \\rvert\\ X=x)$; while $t$ is the threshold above which we assign the positive outcome to the observation $x$.\n\nGiven an instance $x^0$, we want to find a \\emph{counterfactual explanation} for $x^0$, i.e. an $x^{0, \\text{cf}} \\in \\mathcal{X}$ such that $\\mathcal{C}(x^{0, \\text{cf}}) \\neq \\mathcal{C}(x^0)$. 
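To make this setting concrete, here is a minimal sketch of the thresholded classifier together with a brute-force search for the nearest counterfactual over a finite candidate set; the logistic score, its weights, the threshold and the candidate grid are illustrative assumptions, not part of the paper's formulation:

```python
import math

def score(x):
    # hypothetical score R(x), a logistic estimating P(Y = 1 | X = x);
    # the functional form and weights are illustrative assumptions
    return 1.0 / (1.0 + math.exp(-(x[0] + x[1])))

def classify(x, t=0.5):
    # the predicted class is 1 when R(x) > t, else 0
    return 1 if score(x) > t else 0

def l2(a, b):
    return sum((ai - bi) ** 2 for ai, bi in zip(a, b)) ** 0.5

def nearest_counterfactual(x0, candidates):
    """Brute force: the candidate closest to x0 with a flipped outcome."""
    valid = [x for x in candidates if classify(x) != classify(x0)]
    return min(valid, key=lambda x: l2(x, x0)) if valid else None

x0 = (-1.0, 0.0)                      # classified 0: score below the threshold
grid = [(i / 4 - 2, j / 4 - 2) for i in range(17) for j in range(17)]
x_cf = nearest_counterfactual(x0, grid)
```

Real generators replace this exhaustive search with optimization, but the validity requirement (a flipped outcome) and the proximity criterion are the same.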
Of course, the simple requirement that $x^0$ and $x^{0, \\text{cf}}$ have different outcomes under classifier $\\mathcal{C}$ (a condition referred to as the \\emph{validity} of $x^{0, \\text{cf}}$~\\citep{dice, verma2020counterfactual}) is a necessary but not sufficient condition to provide ``good'' counterfactual explanations.\n\nThe general formulation can be written as follows~\\citep{karimi2021survey}: \n\\begin{equation}\n \\left\\{\n \\begin{aligned}\n &x^{0, \\text{cf}} = \\arg\\min_{x\\in\\mathcal{P}_\\mathcal{X}}\\ dist(x, x^0), \\\\\n &\\mathcal{C}(x) \\neq \\mathcal{C}(x^0);\n \\end{aligned}\\right.\n \\label{eq:cff}\n\\end{equation}\nwhere $dist: \\mathcal{X} \\times \\mathcal{X} \\rightarrow \\mathbb{R}^+$ is a suitable distance function over $\\mathcal{X}$.\nThe solution of problem~\\eqref{eq:cff} provides the \\emph{nearest counterfactual explanation} relative to observation $x^0$.\nThe space $\\mathcal{P}_\\mathcal{X}\\subseteq\\mathcal{X}$ is the subset of feature space $\\mathcal{X}$ containing \\emph{plausible} counterfactuals, i.e. it embodies a set of requirements that $x^{0, \\text{cf}}$ should satisfy in order to represent a realistic set of features with respect to the distribution of the training data.\n\n\nFollowing~\\cite{karimi2020algorithmic}, we shall distinguish between \\emph{plausibility} and \\emph{feasibility} constraints.\nPlausibility constraints refer to all the requirements expressed in feature space that go in the direction of having counterfactual explanations that are realistic with respect to the observed distribution. Feasibility, on the other hand, refers to the fact that a specific counterfactual $x^{0, \\text{cf}}$ is actually reachable with a set of actions from the original observation $x^0$. The following example will help clarify the distinction. An individual who is denied a loan may receive a counterfactual explanation where the age is reduced. 
While this may be perfectly plausible in terms of the observed distribution (namely, there are observations in line with the proposed counterfactual), it is definitely not feasible for that individual to reach the proposed counterfactual.\nThis distinction is valuable in particular when discussing \\emph{recommendations} besides explanations, i.e. the algorithmic recourse problem (see section~\\ref{sec:recourse}).\n\n\nThe optimization problem~\\eqref{eq:cff} is often relaxed in terms of an unconstrained loss minimization problem (see e.g. \\cite{wachter2017counterfactual, alibi, mace}):\n\\begin{equation}\n x^{0, \\text{cf}} = \\arg\\min_{x \\in \\mathcal{X}}\\left(L_y(x, x^0) + \\lambda\\ dist(x, x^0) + \\sum_i \\beta_i L_P^i(x, x^0)\\right);\n\\label{eq:loss}\n\\end{equation}\nwhere we have:\n\\begin{itemize}\n \\item the $L_y$ term pushing the outcome $y$ corresponding to $x$ away from that of $x^0$, i.e. pushing towards the \\emph{validity} of $x^{0, \\text{cf}}$;\n \\item the $dist$ term keeping $x$ close to $x^0$ in feature space (\\emph{proximity});\n \\item the $L_P^i$ terms guiding the solution towards \\emph{plausible} points in $\\mathcal{X}$.\n\\end{itemize}\nThe parameters $\\lambda, \\{\\beta_i\\}_i$ control the relative importance of each term. \nAs mentioned in the previous section, several proposals have been put forward for each of the terms in the loss \\eqref{eq:loss}, and in particular for the plausibility terms, to obtain more realistic, and thus more useful, explanations to be given to end users. 
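A minimal sketch of this unconstrained relaxation, with a quadratic validity term and a squared-L2 proximity term, minimized by finite-difference gradient descent; the score function, the weight on the proximity term and the step size are illustrative assumptions, and the plausibility terms are omitted for brevity:

```python
import math

def score(x):
    # hypothetical logistic score depending on the first feature only
    return 1.0 / (1.0 + math.exp(-2.0 * x[0]))

def find_counterfactual(x0, t=0.5, lam=0.1, lr=0.5, steps=300, eps=1e-4):
    """Finite-difference gradient descent on L_y + lambda * dist."""
    target = 0.0 if score(x0) > t else 1.0    # push toward the other class

    def loss(x):
        l_y = (score(x) - target) ** 2                   # validity term
        d = sum((a - b) ** 2 for a, b in zip(x, x0))     # proximity term
        return l_y + lam * d

    x = list(x0)
    for _ in range(steps):
        grad = []
        for i in range(len(x)):
            xp = list(x)
            xp[i] += eps
            grad.append((loss(xp) - loss(x)) / eps)
        x = [xi - lr * g for xi, g in zip(x, grad)]
    return x

x_cf = find_counterfactual([-1.0, 0.0])
```

Note how the proximity term keeps the second feature (irrelevant to the score) essentially untouched, while the validity term drags the first feature across the decision boundary.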
For example, \\cite{alibi} propose to use a term that penalizes the distance between $x$ and the nearest prototype of a class other than $\\mathcal{C}(x^0)$, while \\cite{dhurandhar2018explanations} add a term penalizing the distance between $x$ and its reconstruction by an autoencoder trained on the given dataset.\n\nHere we restrict ourselves to the case in which, for a given instance $x^0$, a unique counterfactual $x^{0, \\text{cf}}$ is found. It is also reasonable to suggest multiple counterfactuals per observation, which can be done either by simply running the minimization problem \\eqref{eq:loss} multiple times with different random seeds as in \\cite{wachter2017counterfactual}, or by changing the formulation \\eqref{eq:loss} to account for direct minimization over multiple counterfactual candidates, possibly diverse from each other so as to capture different aspects of the overall explanation, see e.g. \\cite{dice, mace}.\n\n\\subsection{Counterfactual explanations in latent space}\n\nWe propose an algorithmic approach that builds on an arbitrary counterfactual explanation optimizer (namely, a strategy for solving a specific formulation of problem~\\eqref{eq:loss}) but is able to find counterfactuals that take into account the underlying causal structure by design. In brief, we propose to generate explanations and corresponding recommendations by searching for nearest counterfactuals not in feature space, but in a latent space representing the residual degrees of freedom once the causal structure of the problem at hand is taken into account.\nThis approach has the advantage of providing the end users with feasible actions to reach a desired outcome, and of doing so with --- roughly speaking --- a ``simple'' change of variables on top of existing methodologies.\n\n\nIn doing so, we make use of causal graphs and Structural Causal Models (SCMs) (which we discuss in more detail in subsections~\\ref{sec:graph} and \\ref{sec:SE}, respectively). 
This is in line with what is proposed in \\cite{karimi2020algorithmic}, but expands on it in that we provide a methodology to compute causality-based counterfactual explanations without the need for a closed-form SCM, and one that can be used on top of any counterfactual generator, such as the ones proposed in \\cite{wachter2017counterfactual}, \\cite{alibi} or \\cite{dice}. Indeed, other approaches to incorporating causality into counterfactual explanation generators either rely on an \\emph{ad hoc} constrained optimization strategy~\\citep{karimi2020algorithmic}, or they face the problem by adding additional terms to the loss~\\eqref{eq:loss}~\\citep{mahajan2019preserving}. \n\n\nThe contribution of our proposal is twofold: providing a straightforward way to incorporate causal relationships into the generation of counterfactual explanations for an arbitrary choice of the baseline generator of explanations; and providing, besides counterfactual explanations, causality-aware recommendations for algorithmic recourse.\n\n\n\n\nIn a nutshell, our proposal can be summed up as: \n\\begin{enumerate}\n \\item use the SCM to translate the problem from feature space to the space of exogenous and root variables, which we shall call the \\emph{latent space} hereafter,\n \\item apply an arbitrary counterfactual explanation optimizer on the latent space,\n \\item translate counterfactuals back to the original feature space.\n\\end{enumerate} \n\n\n\n\n\\subsection{Causal graph}\n\\label{sec:graph}\n\n\nOur solution requires access to a predefined causal graph that encodes the causal relationships among the variables of the dataset. Modeling causal knowledge is complex and challenging since it requires an actual understanding of the relations, beyond statistical evidence. Different causal discovery algorithms have been proposed to identify causal relationships from observational data through automatic methods~\\citep{glymour2019review}. 
For example, the Python \\texttt{Causal Discovery ToolBox}~\\citep{kalainathan2020causal} includes many existing causal modeling algorithms such as PC~\\citep{spirtes2000causation}, Structural Agnostic Model (SAM)~\\citep{sam}, Max-Min Parents and Children (MMPC)~\\citep{tsamardinos2003time}, etc.; the Python library \\texttt{CausalNex} implements the NOTEARS algorithm by~\\cite{zheng2018dags}; the R packages \\texttt{pcalg}~\\citep{kalisch2012causal}, \\texttt{kpcalg}~\\citep{kpcalg}, \\texttt{bnlearn}~\\citep{bnlearn} include a vast selection of causal inference algorithms. In general, it is important that domain experts validate the relations detected by the causal discovery routine, or include new ones when deemed necessary. Moreover, experimentation-based causal inference is also possible in some specific circumstances, e.g. via randomized experiments, and also via a combination of the observational and experimental methodologies~\\citep{mooij2020joint}. \n\n\n\nAs usual, we model the underlying causal relationships among features by means of a Directed Acyclic Graph (DAG) $G = (V, E)$, with $V$ the set of vertices (or nodes) and $E$ the set of directed edges (or links). The nodes of the graph $G$ correspond to the actual variables $X = (X_1, \\ldots, X_d)$ used as predictors in the model. Moreover, we denote with $U$ the exogenous variables, representing factors not accounted for by the features $X$, and with $Y$ the dependent variable to be predicted\/estimated by means of $X$. In causal graph theory, edges represent not only conditional dependence relations, but are interpreted as the causal impact that the source variable has on the target variable. 
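The graph bookkeeping needed later (parent sets, root nodes, a causal ordering) can be sketched with a plain dictionary, here pre-filled with the variables of the German credit example discussed below; no causal-discovery library is involved, and the edges are simply assumed given:

```python
# Parent sets of a toy DAG: each key is a node, each value its pa(v).
parents = {
    "age": [],
    "gender": [],
    "amount": ["age", "gender"],
    "duration": ["amount"],
}

# root nodes are exactly the nodes with an empty parent set
root_nodes = sorted(v for v, pa in parents.items() if not pa)

def topological_order(parents):
    """Order nodes so that every node appears after all of its parents."""
    order, placed = [], set()
    while len(order) < len(parents):
        for v, pa in parents.items():
            if v not in placed and all(p in placed for p in pa):
                order.append(v)
                placed.add(v)
    return order
```

Such an ordering is what lets the structural equations below be evaluated recursively along the causal flow.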
We refer to nodes with no parents in $G$ as \\emph{root nodes}, namely \n\\[\n \\text{root nodes} = \\left\\{v \\in G\\ \\rvert\\ \\pa(v) = \\emptyset\\right\\}.\n\\]\n\nA Structural Causal Model (SCM) is a triplet $(X, U, F)$ where $F: \\mathcal{U} \\rightarrow \\mathcal{X}$ is a set of functions mapping the exogenous (unobserved) variables to the endogenous (observed) ones: $X = F(U)$. These are called Structural Equations (SE) and, besides describing which variables causally impact which (something already encoded in the graph $G$), they also determine \\emph{how} these relations work. Therefore, SCM prescriptions are much stronger than simply prescribing a DAG.\n\n\nThe dataset $D_n = \\left\\{(x^1, y^1), \\ldots, (x^n,y^n)\\right\\}$ is composed of $n$ i.i.d. realizations of $(X, Y)$. Each $x^i$ is a $d$-dimensional vector, each component representing an observed feature. In the same fashion, $\\left\\{u^1, \\ldots, u^n\\right\\}$ represent the realizations of the unobserved variables $U$.\n\n\n\n\n\\subsection{Structural Equations}\\label{sec:SE}\n\nStructural Equations are relations describing the precise functional form that links the latent variables $U$ to the observable ones $X$. Assuming an Additive Noise Model (ANM), we have the following:\n\\begin{equation}\n X_v = f_v(\\pa(X_v)) + U_v,\\quad v = 1, \\ldots, d.\n\\label{eq:ANM}\n\\end{equation}\n\nBesides this assumption, we also assume \\emph{causal sufficiency}, i.e. that there are no confounders unaccounted for in the specified DAG.\n\nIn figure~\\ref{fig:german_graph} we see an example of a DAG representing the causal relationships in the German credit dataset~\\citep{german}, as assumed in \\cite{karimi2020algorithmic}. Age and gender are root nodes, i.e. they are not caused by any other variable. $U_1, \\ldots, U_4$ are the latent variables representing all the unobserved external causes. 
The unobserved causes for root nodes, namely $U_1$ and $U_2$, are indeed redundant, in the sense that $X_\\text{root nodes}=U_\\text{root nodes}$. The SE relative to figure~\\ref{fig:german_graph} have the following form:\n\\begin{subequations}\n\\begin{align}\n &\\text{age} = U_1,\\\\\n &\\text{gender} = U_2,\\\\\n &\\text{amount} = f_3(\\text{age}, \\text{gender}) + U_3,\\\\\n &\\text{duration} = f_4(\\text{amount}) + U_4.\n\\end{align}\n\\label{eq:SE_german}\n\\end{subequations}\n\n\\input{figs\/german_graph}\n\n\nIn what follows, instead of specifying\/assuming a precise form for each $f_v$ in equation \\eqref{eq:ANM}, we are going to infer them from observations, namely from the collection $\\{x^1, \\ldots, x^n\\}$. Specifically, in the spirit of~\\cite{pawlowski2020deep}, we shall learn a regressor model $\\mathcal{M}_v$ estimating $X_v$ from $\\pa(X_v)$, and then compute the unobserved term as the residual:\n\\begin{equation} \n \\hat{U}_v = X_v - \\hat{X}_v = X_v - \\mathcal{M}_v(\\pa(X_v)),\\quad v = 1, \\ldots, d; \n\\label{eq:residual}\n\\end{equation}\nwhich is the equivalent of $\\hat{U} = \\hat{F}^{-1}(X)$, where $\\hat{F}$ are the SE estimated from data through the $\\mathcal{M}_v$ as discussed below.\n\nFor root nodes the model $\\mathcal{M}_v$ is of course not needed, and all the variability is encoded in the latent variable $U_v$.\\footnote{It is just a matter of notation whether to introduce auxiliary latent variables also for root nodes; we decide to do so for consistency of the formulation.}\n\nSince for root nodes $r$ the SE simply reduce to $F_r(U) = U_r$, once all the models $\\mathcal{M}_v$ are learned, following the causal flow in the DAG it is possible to recursively compute the actual function $F$ connecting $U$ to $X$ --- namely $X = F(U)$ --- by the following relation\\footnote{With a slight abuse of notation we omit the estimation symbol $\\hat{}$ over $F$ from here on.}:\n\\begin{equation}\n F_j(U) = 
\\mathcal{M}_j\\left(\\left\\{F_v(U)\\right\\}_{v\\in\\pa(X_j)}\\right) + U_j,\\quad \\forall j\\ \\text{non-root}.\n \\label{eq:ANM2}\n\\end{equation}\n\nThe procedure is summed up in Algorithm~\\ref{alg:SE}. \n\n\n\\subsection{Model in the latent space}\n\n\nGiven the model $\\mathcal{C}$ with score function $R(x)$ as in \\eqref{eq:M} and the Structural Equations $X=F(U)$, it is straightforward to build their composition, effectively obtaining a model estimating $Y$ given $U$:\n\\begin{equation}\n \\mathcal{C}_u(u) = (\\mathcal{C} \\circ F)(u) = \\bm{1}_{\\{R(F(u)) > t\\}},\n\\end{equation}\nwhere $R(F(u))$ is an estimate of $P(Y = 1\\ \\rvert\\ U=u)$. Notice that the model $\\mathcal{C}_u$ works precisely by following the causal flow of the underlying causal graph and its SCM. Namely, given some realization of the exogenous factors $U=u$, it builds the corresponding values of the observed features by recursively applying \\eqref{eq:ANM2}, i.e. by following the causal flow, and then predicts $Y$ by means of the initial model $\\mathcal{C}$. 
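The recursive construction of F just described can be sketched as follows; the parent sets and the linear stand-in regressors (and their coefficients) are illustrative assumptions replacing actually fitted models:

```python
# Toy parent sets and stand-in regressors M_v for the non-root nodes;
# the linear forms and coefficients are made up for illustration.
parents = {"age": [], "gender": [], "amount": ["age", "gender"],
           "duration": ["amount"]}

models = {
    "amount": lambda pa: 100.0 * pa["age"] + 50.0 * pa["gender"],
    "duration": lambda pa: 0.01 * pa["amount"],
}

def F(u):
    """Map latent u to features x by following the causal order."""
    x = {}
    remaining = dict(parents)
    while remaining:
        for v, pa in list(remaining.items()):
            if all(p in x for p in pa):
                if not pa:                        # root node: X_v = U_v
                    x[v] = u[v]
                else:                             # X_v = M_v(pa(X_v)) + U_v
                    x[v] = models[v]({p: x[p] for p in pa}) + u[v]
                del remaining[v]
    return x
```

Composing this F with a classifier on the feature space then yields the latent-space model described above.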
Of course, we do not have access to the exogenous variables, but this is not an issue when computing counterfactual explanations, since in that case we only need the value for $U = u^0$ corresponding to the instance that we need to explain ($x^0$) --- and possibly the values $\\{u^1,\\ldots, u^n\\}$ corresponding to the training dataset $\\mathcal{D}$ --- and these can be estimated by means of equation~\\eqref{eq:residual}.\n\nIn other words, $\\mathcal{C}_u$ is a model that \ntakes as input $u$ in the residual (or latent) space, converts it into the original space $\\mathcal{X}$ thanks to the SE $F(u)$ and then predicts its corresponding $Y$ with the model $\\mathcal{C}$.\n\n\n\\subsection{Counterfactual generation}\\label{sec:count_gen}\n\n\nOnce we have obtained the model $\\mathcal{C}_u$ relating $U$ and $Y$, there are three more steps left to obtain the causal counterfactual explanation:\n\\begin{itemize}\n \\item compute the latent variables $\\{u^0, u^1, \\ldots, u^n\\}$,\n \\item generate a counterfactual explanation $u^{0, \\text{cf}} = u^0 + a$ of the observation $u^0$ for the model $\\mathcal{C}_u$,\n \\item given $u^{0, \\text{cf}}$, compute the corresponding feature space counterfactual $x^{0, \\text{cf}}=F(u^{0, \\text{cf}})$.\n\\end{itemize}\nThe procedure is summed up in Algorithm~\\ref{alg:CF}. \n\nNotice that these three steps reflect the usual steps of causal counterfactual computation~\\citep{pearl2016causal}: abduction, intervention and prediction. \n\n\\emph{Abduction} is the phase in which the possible events are restricted by the observation of the actual state of the world, namely $X=x^0$. In our framework, computing $U \\ \\rvert\\ \\{X = x^0\\}$ is done via equation~\\eqref{eq:residual}, i.e. as the residuals of the regression models $\\mathcal{M}_v$. 
\nTaking the example of the German dataset \\eqref{eq:SE_german}, we would have:\n\\begin{subequations}\n\\begin{align}\n &u^0_1 = \\text{age}^0,\\\\\n &u_2^0 = \\text{gender}^0,\\\\\n &u_3^0 = \\text{amount}^0 - f_3(\\text{age}^0, \\text{gender}^0),\\\\\n &u_4^0 = \\text{duration}^0 - f_4(\\text{amount}^0),\n\\end{align}\n\\label{eq:abduction_german}\n\\end{subequations}\nwhere it is understood that the $f_i$'s need to be either known or estimated, e.g. as the regressors $\\mathcal{M}_i$.\n\n\n\\emph{Intervention} is the process of acting upon some variables and fixing them to specific values. In our framework the actual intervention $a$ is computed --- via the minimization problem \\eqref{eq:loss} applied to the model $\\mathcal{C}_u$ for the observation $u^0$ --- as the minimal shift in latent space needed to reach a different outcome with respect to $\\mathcal{C}_u$, namely $u^{0, \\text{cf}} = u^0 + a$. Being a shift on exogenous variables, this is actually a form of \\emph{soft intervention} (see section~\\ref{sec:recourse} for more details).\n\nThe \\emph{prediction} step is the moment in which we compute the values of the observed variables $X$ given the latent $U=u^0$ and given the intervention $a$. 
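The abduction and prediction steps can be sketched end to end on a toy version of the German example; the functions f3 and f4, all numeric values, and the hand-picked action (which in CEILS would instead come from the counterfactual optimizer) are illustrative assumptions:

```python
# Toy structural equations; the linear forms and numbers are made up.
f3 = lambda age, gender: 100.0 * age + 50.0 * gender   # amount regressor
f4 = lambda amount: 0.01 * amount                      # duration regressor

def abduction(x):
    """u = F^{-1}(x): residuals of the structural equations."""
    return {
        "u1": x["age"],
        "u2": x["gender"],
        "u3": x["amount"] - f3(x["age"], x["gender"]),
        "u4": x["duration"] - f4(x["amount"]),
    }

def prediction(u, a):
    """x_cf = F(u + a): push the shifted latents through the SE."""
    age = u["u1"] + a.get("u1", 0.0)
    gender = u["u2"] + a.get("u2", 0.0)
    amount = f3(age, gender) + u["u3"] + a.get("u3", 0.0)
    duration = f4(amount) + u["u4"] + a.get("u4", 0.0)
    return {"age": age, "gender": gender,
            "amount": amount, "duration": duration}

x0 = {"age": 30.0, "gender": 1.0, "amount": 3250.0, "duration": 34.5}
u0 = abduction(x0)
# intervene only on u3: the change propagates causally to duration
x_cf = prediction(u0, {"u3": -1000.0})
```

Note how shifting only the latent cause of amount also changes duration, since the prediction step re-applies the structural equations along the causal flow.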
In our framework, this is nothing but $x^{0, \\text{cf}}=F(a + u^0)$.\nTaking again the example of the German dataset \\eqref{eq:SE_german}, \\eqref{eq:abduction_german}, we have:\n\\begin{subequations}\n\\begin{align}\n &\\text{age}^{0, \\text{cf}} = u^0_1 +a_1,\\\\\n &\\text{gender}^{0, \\text{cf}} = u_2^0 + a_2,\\\\\n &\\text{amount}^{0, \\text{cf}} = f_3(\\text{age}^{0, \\text{cf}}, \\text{gender}^{0, \\text{cf}}) + u_3^0 + a_3 ,\\\\\n &\\text{duration}^{0, \\text{cf}} = f_4(\\text{amount}^{0, \\text{cf}}) + u_4^0 + a_4.\n\\end{align}\n\\label{eq:SE_prediction}\n\\end{subequations}\n\nIn the next section we discuss in more detail the role of actions and interventions in (causal) counterfactual explanations.\n\n\n\n\n\n\n\n\n\n\\begin{algorithm}\n\\footnotesize\n\\caption{Infer Structural Equations $F(U)$}\n\\label{alg:SE}\n\n\\SetKwInOut{Input}{input}\\SetKwInOut{Output}{output}\n\\Input{Dataset of observed variables $\\left\\{x^1, \\ldots,x^n\\right\\}$, causal graph $G = (V, E)$}\n\\Output{structural equations $F(U)$, $\\{\\mathcal{M}_v\\}_{v \\in V}$}\n\n\\BlankLine\n\n\\tcp{define auxiliary set of nodes for which Structural Equations are already computed, and initialize it with root nodes}\n$V_\\text{temp}=\\left\\{v\\in V\\ \\rvert\\ \\pa(v)=\\emptyset \\right\\}$\\\\\n$F_v(U) = U_v$ for $v$ root nodes, i.e. 
such that $\\pa(v) = \\emptyset$\\\\\n\\tcp{loop until SE are computed for all nodes}\n\\While{$V_\\text{temp}\\neq V$ }{\n \\tcp{loop over nodes in $G$}\n \\For{$v$ in $V$}{\n \\If{$\\pa(v) \\subseteq V_\\text{temp}$ and $v \\notin V_\\text{temp}$}{\n train a model $\\mathcal{M}_v$ estimating $X_v$ from $X_{\\pa(v)}$ using data $\\{x^1, \\ldots, x^n\\}$\\\\\n \n \n \\tcp{build structural equations under additive model hypothesis}\n $F_v(U) = \\mathcal{M}_v\\left(\\{F_j(U)\\}_{j\\in\\pa(v)}\\right)+U_v$\\\\\n \\tcp{add node $v$ into the list of nodes for which SE are already computed}\n $V_\\text{temp} = V_\\text{temp} \\cup \\{v\\}$\\\\\n }\n }\n}\n\\KwResult{structural equations $F_v(U)$, $\\mathcal{M}_v$ for all $v\\in G$.}\n\\BlankLine\n\n\\end{algorithm}\n\n\n\n\n\\begin{algorithm}\n\\footnotesize\n\\caption{Train Model $\\mathcal{C}$ and generate counterfactuals from residuals}\n\\label{alg:CF}\n\n\\SetKwInOut{Input}{input}\\SetKwInOut{Output}{output}\n\\Input{dataset of observed variables $\\left\\{(x^1, y^1), \\ldots,(x^n, y^n)\\right\\}$, factual observation $(x^0, y^0)$, structural equations $F(U)$, $\\{\\mathcal{M}_v\\}_{v\\in V}$}\n\\Output{counterfactual satisfying causal constraints $x^{0, \\text{cf}}$ and action $a$}\n\n\\BlankLine\n\nTrain a classifier $\\mathcal{C}$ with input dataset $\\left\\{x^1, \\ldots,x^n\\right\\}$ and target $\\left\\{y^1, \\ldots,y^n\\right\\}$.\\\\\n\\tcp{Build a model that estimates $y$'s from exogenous and root nodes}\n$\\mathcal{C}_u(U) = \\mathcal{C}(F(U))$, $\\mathcal{C}_u: U \\mapsto Y$\\\\\n\\tcp{Generate unobserved variables $\\{u^0, u^1, \\ldots, u^n\\}$}\n$V_\\text{root}=\\left\\{v\\in V\\ \\rvert\\ \\pa(v)=\\emptyset \\right\\}$\\\\\n$u^i_v = x^i_v,\\ \\forall v \\in V_\\text{root},\\ \\forall i=0,\\ldots,n$\\\\\n$u^i_v = x^i_v - \\mathcal{M}_v\\left(\\{x^i_j\\}_{j\\in\\pa(v)}\\right),\\ \\forall v \\notin V_\\text{root},\\ \\forall i=0,\\ldots,n$\\\\\n\\tcp{Generate explanation in latent space with a 
counterfactual generator CF}\n$u^{0, \\text{cf}}=\\text{CF}\\left(\\mathcal{C}_u, \\left\\{(u^i, y^i)\\right\\}_{i=1}^n; u^0, y^0\\right)$\\\\\n\\tcp{Compute action}\n$a = u^{0, \\text{cf}} - u^0$\\\\\n\\tcp{Compute explanation in feature space}\n$x^{0, \\text{cf}} = F(u^{0, \\text{cf}})$\\\\\n\n \\KwResult{action $a$, counterfactual explanation $x^{0, \\text{cf}}$}\n \\BlankLine\n\n\\end{algorithm}\n\n\n\n\n\n\n\n\n\n\\section{Counterfactual explanations and recommended actions \\label{sec:recourse}}\n\n\nCounterfactual explanations, defined as the nearest possible point to the original observation in feature space such that the model outcome is changed, may have problems in terms of realistic \\emph{feasibility}. Indeed, while they may prove useful in \\emph{explaining} the reasons for an outcome for a specific instance --- by showing what features should have been different --- they may run into problems when counterfactuals are meant to serve as possible \\emph{recommendations} on how to take \\emph{actions} in order to get a different outcome. \n\\citet{ustun2019actionable} introduces the framework of \\emph{actionable recourse}, which tries precisely to fill the gap between counterfactual explanations and counterfactual recommendations.\n\nFollowing \\cite{karimi2020algorithmic, karimi2021survey}, it is useful to draw a line between the notions of \\emph{plausibility} and \\emph{feasibility} (or \\emph{actionability}). As mentioned in section~\\ref{sec:prob_setting}, we talk about plausibility constraints whenever we refer to conditions \\emph{on the feature space} that pertain to having realistic counterfactual explanations with respect to the training data distribution. Feasibility constraints, instead, are conditions on the \\emph{actions} needed to reach some point in feature space. 
\nA couple of examples can clarify the apparent redundancy of these two concepts: a person with a low credit rating is unlikely to be granted a loan, thus an intuitive way to provide a counterfactual explanation is to suggest a profile with the same features but a higher rating. One problem with this is that it may be an \\emph{unrealistic} profile: there may be other features correlated with rating, so the suggested profile may in fact be an outlier for the true distribution. Besides, there is another problem: how can the loan applicant \\emph{reach} the suggested profile? Obviously, he cannot force his rating to be higher: rating can change, but only as a consequence of changes in other features, and these changes are not prescribed in the suggested profile. Therefore, the suggested profile is neither plausible nor feasible, for two different reasons. Take now the scenario in which a person is denied a loan because he is too old: in this case suggesting to be younger is of course useless since it is not feasible, but the resulting suggested profile would be, in general, perfectly plausible in terms of features.
\n\n\nDepending on the behavior with respect to actions, it is also useful to define (see~\\cite{karimi2020algorithmic}):\n\\begin{itemize}\n \\item \\textbf{immutable} features, as those that cannot change in any way, neither by direct intervention nor as an indirect consequence of changes in other variables;\n \\item \\textbf{mutable but non-actionable} features, as those that can change due to changes in other connected features, but cannot be directly intervened upon (such as rating in the example above);\n \\item \\textbf{actionable} features, as those that can vary due to both indirect and direct interventions.\n\\end{itemize}\n\n\nThe recourse problem is defined by~\\citet{ustun2019actionable} as a constrained optimization problem very similar to that of the nearest counterfactual~\\eqref{eq:cff}:\n\\begin{equation}\n \\left\\{\n \\begin{aligned}\n &a^* = \\arg\\min_{a\\in\\mathcal{F}_\\mathcal{A}}\\ cost(a, x^0), \\\\\n &\\mathcal{C}(x^0 + a) \\neq \\mathcal{C}(x^0),\\\\\n &x^0 + a \\in \\mathcal{P}_\\mathcal{X};\n \\end{aligned}\\right.\n \\label{eq:recourse}\n\\end{equation}\nwhere $a^*$ is the ``cheapest'' \\emph{action} --- in terms of the cost function $cost(a, x)$ --- that the individual identified by $x^0$ needs to perform in order to reach a different model outcome. The space $\\mathcal{F}_\\mathcal{A} \\subseteq\\mathcal{A}$ is that of all the actions $\\mathcal{A}$ that are \\emph{feasible}. \nIt is straightforward to notice that, once we define $x = x^0 + a$ and $cost(a, x^0) = dist(x, x^0)$, the recourse problem~\\eqref{eq:recourse} is equivalent to \\eqref{eq:cff} apart from the explicit formulation of feasibility constraints. \n\nThe problem with~\\eqref{eq:recourse} is that it does not take into account the interdependence among variables and the fact that, in general, a change in one variable comes with changes in other variables as well.
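As a concrete toy instance of the recourse problem above, the following sketch solves it by brute-force enumeration for a hypothetical linear classifier, with a feasible action set in which the second feature may only increase; the integer-programming machinery of Ustun et al. is replaced by a grid search purely for illustration:

```python
import itertools
import numpy as np

# Hypothetical linear classifier standing in for C.
classifier = lambda x: int(x[0] + 2.0 * x[1] > 3.0)

x0 = np.array([0.5, 0.5])          # factual point, classified as 0

# Feasible action set: feature 0 unrestricted, feature 1 may only
# increase (an "age"-like constraint); grid over candidate actions.
steps = np.round(np.arange(-2.0, 2.05, 0.05), 2)
feasible = (np.array(a) for a in itertools.product(steps, steps)
            if a[1] >= 0.0)

# Keep feasible actions that flip the outcome; pick the cheapest in L1 norm.
valid = [a for a in feasible if classifier(x0 + a) != classifier(x0)]
a_star = min(valid, key=lambda a: np.abs(a).sum())

assert classifier(x0 + a_star) == 1 and a_star[1] >= 0.0
```

Even this tiny example shows that the returned action moves the individual, not just the point in feature space; what it still ignores is the interdependence among variables discussed next.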
\nThe natural framework to discuss how this interdependence impacts actions and counterfactual explanations is that of causal reasoning, and in particular the notion of \\emph{hard} and \\emph{soft interventions}~\\citep{eberhardt2007interventions}, which we summarize here.\nFirst of all, notice that the action computed as discussed in section~\\ref{sec:count_gen}, namely:\n\\[\n a = u^{0, \\text{cf}} - u^0,\n\\]\ndiffers for root nodes and non-root nodes. For root nodes there is no difference between the latent variable $U_v$ and the feature $X_v$, thus in this case the action is simply the difference between the original and the counterfactual values of that feature. For non-root nodes, instead, the situation is different.\nIn general, we can express the relation between the real world features and the action with a straightforward application of the SE, namely:\n\\begin{equation}\n x^{\\text{cf}}_v - x_v = F_v(u^{\\text{cf}}) - F_v(u) = f_v(\\pa(x^{\\text{cf}}_v)) - f_v(\\pa(x_v)) + a_v,\n\\end{equation}\ni.e., the change in each feature is the sum of the change caused by its parents' change and an explicit intervention. This is usually called a \\emph{soft intervention}~\\citep{eberhardt2007interventions}, since the explicit action $a_v$ is performed \\emph{in addition} to the changes due to interventions on the ancestors. \nIn contrast, a \\emph{hard intervention} is identified by the following formula\n\\begin{equation}\n x^{\\text{cf}}_v - x_v = a_v,\n\\end{equation}\ni.e., when acting on a variable $X_v$ we force it to assume the value $x_v + a_v$, ``destroying'' or overriding any change due to its ancestors' changes. Therefore, our proposal implements soft interventions by default.
Notice, however, that hard interventions are far less interesting, and they are in any case much easier to account for: if we want to force a hard intervention on a variable, it is sufficient to cut out all the corresponding incoming edges in the causal graph. Indeed, as mentioned, for root nodes there is no distinction between hard and soft interventions. In other words, hard interventions are simply soft interventions on a modified causal graph.\n\nIn general, if we label with $\\bm{N},\\bm{I}$ the sets of immutable and actionable features, respectively, we can write the intervention as follows:\n\\begin{equation}\n x^{\\text{cf}}_v - x_v = \\left[f_v(\\pa(x^{\\text{cf}}_v)) - f_v(\\pa(x_v))\\right]\\bm{1}_{X_v \\notin \\bm{N}} + a_v\\bm{1}_{X_v\\in\\bm{I}}.\n\\end{equation}\nAt the practical level, imposing that a variable is mutable but non-actionable is straightforward in our implementation: it simply results in keeping the corresponding latent space variable fixed. \nOn the other hand, imposing that a variable $A$ is immutable is straightforward when it is a root node, while if it has parents we need to consider carefully what \\emph{immutable} means in the specific situation: if it means that $A$ must be kept fixed irrespective of the values of its parents, then it is equivalent to performing a null hard intervention, i.e. we need to change the graph by removing its incoming edges and at the same time freeze its latent variable $U_A$; if it means that the variable is fixed and that all the variables that can causally impact it should be fixed as well, then we need to treat as non-actionable both $A$ and all its ancestors up to the root level, i.e.
we cannot change any of $A$'s ancestors, or it would result in possible changes in $A$.\nIn the experiments we have done (see section~\\ref{sec:experiments}) we consider immutable features only at the root level, thus the above distinction on the concept of immutability does not apply.\n\n\nIt is now clear that in the standard recourse optimization problem~\\eqref{eq:recourse}, the action definition as \\mbox{$x^{0, \\text{cf}} = x^0 + a$} is equivalent to saying that all actionable variables are acted upon via hard interventions, i.e. each action is seen as enforcing a change in the feature, overriding any other change due to variable interdependence. While this may be realistic in some specific cases, it cannot be taken as a paradigm for the general picture. Indeed, a combination of hard and soft interventions is very likely to be the most common situation. \n\nMoreover, even if we accept as realistic that all actions are hard interventions, there still may be a causal flow impacting mutable but non-actionable variables. Therefore, completely neglecting the causal structure and sticking to the formulation~\\eqref{eq:recourse} is equivalent to assuming both that all actions are hard interventions \\emph{and} that there are no mutable but non-actionable variables. Indeed, without a causal structure, the distinction between immutable and non-actionable variables loses meaning.\n\n\n\n\n\n\n\\subsection{Computing \\emph{ex post} actions of a given counterfactual explanation}\n\\label{sec:action_feature_space}\n\nWe have tried to clarify the reasons why computing algorithmic recourse without taking into account the causal structure of predictors results in a very specific and not particularly realistic form of intervention.
We now try to address the following issue: instead of finding counterfactual explanations via a latent space representation and then computing actions as differences in the latent variables, why don't we find counterfactual explanations $x^*$ via ``standard'' algorithms, namely via equation \\eqref{eq:cff} or \\eqref{eq:loss}, and \\emph{then} find the actions that, given the SCM, would lead to that counterfactual profile?\n\nThis program can be pursued by simply computing \\[\na = u^* - u^0,\n\\]\nwhere $u^*=F^{-1}(x^*)$ and $u^0=F^{-1}(x^0)$ are the residuals with respect to the given SCM of the found counterfactual $x^*$ and the original observation $x^0$, respectively. We refer to this approach as computing the \\emph{ex post} actions; it would indeed save all the effort of translating the model into the latent space, reducing the problem to the much simpler task of computing residuals. The drawback is that the actions found will not, in general, satisfy feasibility constraints, since causality is here considered only in retrospect, and there is nothing preventing the found counterfactual $x^*$ from being unreachable with respect to the underlying causal structure.\n\nA toy example will clarify this: suppose you have two variables $A$ and $B$, where, e.g., $B = \\alpha A + U_B$, with $\\alpha > 0$. Namely, an increase in $A$ causes a linear increase in $B$. Suppose also that, given an observation $x^0$, the probability of finding a valid counterfactual is higher in regions where $A$ is higher but $B$ is not. Then, finding $x^*$ will likely lead to a situation where $A$ is higher and $B$ is fixed. In terms of actions, this is only possible when $B$ is intervened upon with a \\emph{decrease} in order to compensate for the increase caused by $A$.
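This toy example can be made concrete with a small numerical sketch (assuming, purely for illustration, $\\alpha = 2$ and a counterfactual that raises $A$ while keeping $B$ fixed):

```python
import numpy as np

alpha = 2.0  # assumed causal coefficient in B = alpha * A + U_B

def F_inv(x):
    """Residuals (U_A, U_B) of a point with respect to the toy SCM."""
    a_feat, b_feat = x
    return np.array([a_feat, b_feat - alpha * a_feat])

x0     = np.array([1.0, 3.0])   # factual observation
x_star = np.array([2.0, 3.0])   # counterfactual: A raised, B unchanged

# Ex post action: difference of residuals.
action = F_inv(x_star) - F_inv(x0)
print(action)  # [ 1. -2.]: keeping B fixed requires pushing B down by 2
```

Although the counterfactual looks innocuous in feature space, the implied action on $B$ is a decrease of 2.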
If, for any reason, there was a feasibility constraint on actions on $B$ such that only increasing interventions are allowed, then the found counterfactual $x^*$ would not correspond to a set of feasible actions.\n\nTherefore, even if it is in principle possible to compute counterfactual explanations via the standard optimization problem \\eqref{eq:loss} in feature space and \\emph{then}, given a SCM, compute the corresponding actions to be used as recommendations, this simpler procedure fails, in general, to satisfy feasibility constraints. Of course there are cases in which this simpler approach results in outcomes very similar to the CEILS outcomes: these are the cases in which feasibility constraints ``don't mix with'' the underlying causality. Indeed, feasibility problems arise when there are constraints on a variable $B$ having mutable parents ($A$ in the example above). In this case, the changes in $B$ due to the changes in its parents --- which are not ``seen'' by the standard counterfactual optimizer --- may result in an action on $B$ that is no longer compatible with the feasibility constraints. We shall discuss this further in the next section, devoted to experiments.\n\n\n\n\n\\section{Experiments \\label{sec:experiments}}\n\nIn this section, we present the experiments conducted on several datasets to validate the CEILS method. We compare our results with a baseline generator of counterfactual explanations using a set of metrics that captures the particularities of our proposal. Next, we describe the datasets used in the experiments and the experimental setup, detailing the definition of the metrics considered. Finally, we discuss the obtained results.
\n\n\\subsection{Datasets}\\label{sec:datasets}\nFor the experiments we use a synthetic dataset, two public datasets (German Credit and Sachs), and a proprietary dataset from the financial domain.\n\n\\paragraph{Synthetic dataset.}\n\n\\input{figs\/synth_graph}\n\nWe generate a toy dataset of 100,000 samples with two features ($X_1$ and $X_2$) and a binary outcome ($Y$) with the following structural equations: \n\\begin{subequations}\n\\begin{align}\n &X_1 =U_1,\\quad U_1 \\sim \\mathcal{N}(-1,1)\\\\\n &X_2 =X_1+U_2,\\quad U_2 \\sim \\mathcal{N}(5,1)\\\\\n &Y = \\bm{1}_{(3 X_2 - X_1 + U_Y) > t},\\quad U_Y \\sim \\mathcal{N}(0,1)\n\\end{align}\n\\label{eq:SE_synth}\n\\end{subequations} \nwhere, in the experiment, $t$ is chosen for simplicity as the median value of $(3 X_2 - X_1 + U_Y)$.\n\n\nFigure~\\ref{fig:synth_graph} depicts the causal graph of the dataset. The rationale behind equations~\\eqref{eq:SE_synth} is to have a very simple model that nevertheless allows us to show some crucial aspects of our proposed methodology. In particular, the key feature of equations~\\eqref{eq:SE_synth} is to have a non-root node $X_2$ with a high impact on the target variable $Y$. In this way, we expect to have interesting results when considering $X_2$ a non-actionable feature. \nTo this end, we define two different experiments: first we set $X_2$ to be actionable (Synthetic \\#1), then we consider $X_2$ as mutable but non-actionable (Synthetic \\#2). \n\n\n\n\\paragraph{German credit dataset~\\citep{german}.}\n\n\nThis dataset contains financial information on 1,000 applicants, who are classified as having high or low risk of defaulting on their loans. We consider a subset of the features in the same way as \\cite{karimi2020algorithmic}. In particular, we use four main features with the causal relations represented in the DAG of figure~\\ref{fig:german_graph}.
Moreover, we constrain gender to be immutable and age to increase only.\n\n\n\\paragraph{Sachs dataset \\citep{sachs2005causal}.}\n\nThis dataset contains information on protein expression levels in the human immune system. In particular, it consists of 854 observations with 11 independent measurements of phosphorylated molecules derived from immune system cells subjected to molecular interventions. We base the experiment on the molecules \\texttt{PKC}, \\texttt{MEK}, \\texttt{Raf} and \\texttt{PKA} to predict \\texttt{Erk}, considering a binary problem with $Y= 1$ when \\texttt{Erk} is above the median value, and $Y=0$ otherwise. The variables are related according to the DAG depicted in figure~\\ref{fig:sachs_graph} (obtained from \\cite{sachs2005causal}). The inhibitions and activations of the molecules reported in \\cite{sachs2005causal} define the constraints that we impose in our experiment. In particular, we consider \\texttt{Raf} as non-actionable but mutable, and we impose that we can act on \\texttt{PKA} only by increasing it and on \\texttt{Mek} only by decreasing it. As reported in \\cite{McCubrey2007roles}, the molecules \\texttt{Raf\/Mek\/Erk} are associated with growth factors, and it could be interesting to control \\texttt{Erk}, without intervening directly on it, to prevent cell proliferation and apoptosis.\n\n\n\n\\paragraph{Proprietary dataset.}\n\n\n\n\n\n\n\nWe use a proprietary dataset with 220,304 credit applications~\\citep{befair}. This contains 8 features, namely gender, age, citizenship, monthly income, bank seniority, requested amount, number of payments and rating, as well as the information about the granting\/non-granting of the loan. The features are related according to the DAG shown in figure~\\ref{fig:proprietary_graph}. We refer to~\\cite{befair} for more details on the data and the corresponding causal graph.\nWe consider gender and citizenship as immutable features, and age and bank seniority as features that can only increase in value.
Moreover, as in the synthetic case, we run two different experiments, one in which rating is set to actionable (Proprietary \\#1), and one in which it is set to mutable but non-actionable (Proprietary \\#2).\n\n\n\\begin{figure}\n \\begin{subfigure}[b]{0.48\\textwidth}\n \\centering\n \\resizebox{\\linewidth}{!}{\\input{figs\/sachs_graph}} \n \\caption{Sachs dataset.}\n \\label{fig:sachs_graph}\n \\end{subfigure}\n \\begin{subfigure}[b]{0.48\\textwidth}\n \\centering\n \\resizebox{\\linewidth}{!}{\\input{figs\/proprietary_graph}} \n \\caption{Proprietary dataset.}\n \\label{fig:proprietary_graph}\n \\end{subfigure}\n \\caption{Causal graphs employed in the experiments.}\n \\end{figure}\n\n\n\\subsection{Experimental setup}\nFor all the experiments, we model the $\\{\\mathcal{M}_v\\}$ as feed-forward neural networks with 2 hidden layers. The classifier $\\mathcal{C}$ is also modeled as a feed-forward neural network with 2 hidden layers with ReLU activation functions. We employ the open source library \\texttt{TensorFlow} for the implementations~\\citep{tensorflow2015-whitepaper}.\n\n\\subsubsection{Baseline generator of counterfactual explanations}\nWe use as baseline generator of counterfactuals the interpretable counterfactual explanations guided by prototypes~\\citep{alibi} and, in particular, the implementation included in the open source library \\texttt{Alibi}~\\citep{alibisoft}. More specifically, we implement a counterfactual generator with loss weights: 0.2 for $L_y$ (\\texttt{kappa}), 100 for $L_\\text{prototype}$ (\\texttt{theta}), 0.5 for the $L_1$ proximity term (\\texttt{beta}), and 0.5 for the $L_{k\\text{-}d\\ \\text{tree}}$ term (\\texttt{gamma}). We employ the $k$-$d$ tree term instead of the autoencoder option.
We refer to \\cite{alibisoft} for details on these parameters.\n\n\nWe obtain counterfactual explanations first by direct application of the counterfactual generator guided by prototypes (referred to as ``Proto'' in Table~\\ref{tab:results}), and then by overlaying our CEILS procedure. \n\nWe evaluate these two sets of counterfactual explanations by means of a collection of metrics discussed in section~\\ref{sec:metrics}. These metrics are of course computed only on valid counterfactuals. Notice, however, that the set of valid counterfactual explanations is, in general, dependent on the methodology. Therefore, we also compute the values of the metrics on the \\emph{intersection} of valid counterfactual explanations obtained from both methodologies. This is done in order to have a fair comparison, since one of the methods could have, e.g., very good metrics on very few valid counterfactual explanations, namely the ones related to the factual profiles that are the easiest to explain.\n\nThe evaluation is performed on counterfactual explanations generated for 100 random out-of-sample observations for the German credit dataset, 80 for the Sachs dataset, and 1,000 for the rest of the datasets.\n\n\n\n\n\\subsubsection{Evaluation metrics}\n\\label{sec:metrics}\n\nTable~\\ref{tab:results} shows our experimental results with respect to the following set of metrics (see e.g. \\cite{verma2020counterfactual} and \\cite{karimi2021survey} for a review of potential metrics):\n\n\\begin{itemize}\n \\item \\emph{Validity} is the fraction of generated explanations that are valid counterfactual explanations, i.e. such that $\\mathcal{C}(x^{\\text{cf}}) \\neq \\mathcal{C}(x)$. Thus, it reflects the effectiveness of a method in generating explanations. \n\n\n\n \\item \\emph{Proximity}, as discussed in section~\\ref{sec:prob_setting}, measures how far the counterfactual explanation is from the original instance.
Following~\\cite{wachter2017counterfactual} and \\cite{dice}, the proximity of continuous features is computed as the mean of the feature-wise $L_1$ distance re-scaled by the Median Absolute Deviation from the median (MAD) of each feature. On the other hand, for categorical features we consider a distance of 1 whenever the counterfactual example differs from the original input:\n \n \\begin{subequations}\n \\begin{align}\n &proximity_{\\text{cont}}(x^{\\text{cf}},x) = \\frac{1}{n_\\text{cont}} \\sum_{p=1}^{n_\\text{cont}} \n \\frac{|{x^{\\text{cf}}_p - x_p}|}{MAD_p},\n \\\\\n &proximity_{\\text{cat}}(x^{\\text{cf}},x) = \\frac{1}{n_\\text{cat}} \\sum_{p=1}^{n_\\text{cat}} \n I(x^{\\text{cf}}_p \\neq x_p);\n \\end{align}\n \\label{eq:proximity}\n \\end{subequations}\n \n where $n_\\text{cont}$ and $n_\\text{cat}$ are the number of continuous and categorical features, respectively, and $MAD_p$ is the Median Absolute Deviation from the median for the $p$-th continuous variable.\n \n \n \n \\item \\emph{Sparsity} measures the number of feature changes that distinguish the counterfactual explanation from the original instance. In particular, to identify relevant perturbations we consider a threshold as in \\cite{dice}: \n $t = \\min(MAD(f), q_{10}(|\\w{f}-\\text{median}(\\w{f})|))$, $\\w{f}=[f_i: f_i \\neq \\text{median}(f)]$\\\\\n \n \\begin{equation}\n sparsity(x^{\\text{cf}}, x) = \\sum_{p=1}^d \\bm{1}_{\\left\\{\\rvert x^{\\text{cf}}_p - x_p\\rvert > t_p\\right\\}}.\n \\end{equation}\n \n Analogously, we compute sparsity also in terms of actions (see below).
\n \n \n \\item \\emph{Distance}, related to proximity, measures the $L_1$ distance between counterfactual and factual observations:\n \\begin{equation}\n distance(x^{\\text{cf}}, x) = \\lVert x^{\\text{cf}} - x \\rVert_1.\n \\label{eq:distance}\n \\end{equation}\n\\end{itemize}\n\nAll the above metrics refer directly to the counterfactual explanations, thus we refer to them as metrics on feature space. The following metrics, instead, focus on the evaluation of actions, thus on latent space quantities. Notice that, when considering the baseline method Proto (or, in general, non-causal methods), there is no latent space to be considered, or, equivalently, as discussed above, actions are all hard interventions, i.e. shifts in feature space. However, as argued in section~\\ref{sec:action_feature_space}, we could alternatively think of generating counterfactuals with a non-causal method and then compute anyway the corresponding \\emph{ex post} actions via the SCM. In the section of table~\\ref{tab:results} devoted to latent space metrics, for rows corresponding to the baseline method we have indeed reported metrics computed with this \\emph{ex post} rationale, thus computing the residuals of the generated counterfactual explanations with respect to the SCM. \n\n\\begin{itemize}\n\n \n \\item \\emph{Cost}, as discussed in section~\\ref{sec:recourse}, is the magnitude of the action needed to reach a counterfactual point. Specifically, we compute the cost as the $L_1$ norm of the action: $\\lVert a\\rVert_1$. In terms of the feature space, considering the SCM we have: \n \\begin{equation}\n cost(x^{\\text{cf}}, x) = \\lVert F^{-1}(x^{\\text{cf}}) - F^{-1}(x)\\rVert_1.\n \\label{eq:cost_causal}\n \\end{equation}\n Notice that this is in fact equivalent to the \\emph{distance} metric, but on the latent space.\n \n\n\n\n \\item \\emph{Feasibility}, as discussed in section~\\ref{sec:recourse}, pertains to the fact that suggested actions are realistic, i.e.
actually doable. Therefore, it includes all the requirements upon actions. For example, when a feature is non-actionable, any non-null action on that feature is unfeasible. In the same way, when a feature can only increase (e.g. age), any negative action on that feature would be unfeasible. We measure feasibility as the percentage of explanations whose actions are all feasible. Thus, we establish the following formulation:\n \n \\begin{subequations}\n \\begin{align}\n & F^{-1}(x^\\text{cf})_v = F^{-1}(x)_v \\iff a_v = 0,\\quad \\forall v\\ \\text{non-actionable};\\\\\n & F^{-1}(x^\\text{cf})_v \\geq F^{-1}(x)_v\\iff a_v \\geq 0,\\quad \\forall v\\ \\text{increasing only};\\\\\n & F^{-1}(x^\\text{cf})_v \\leq F^{-1}(x)_v\\iff a_v \\leq 0,\\quad \\forall v\\ \\text{decreasing only}.\n \\end{align}\n \\end{subequations}\n \n \\item \\emph{Causal plausibility} is inspired by \\cite{mahajan2019preserving}, who propose to add this term to the loss in problem \\eqref{eq:loss} in order to keep the generated explanations ``as close as possible'' to the underlying causal structure. The intuition is to measure, for each feature, the distance between the found counterfactual observation and the value that the feature should have if it perfectly obeyed the SCM, i.e. with null residuals. The idea is to compare $x^\\text{cf}_v$ with $f_v(\\pa(x^\\text{cf}_v))$. To compute this, one has to build the vector \n \\begin{equation*}\n w_v = \\left\\{\n \\begin{aligned} \n & \\mathcal{M}_v(\\pa(x^\\text{cf}_v)), \\quad v\\ \\text{non root}\\\\\n & x^\\text{cf}_v,\\quad v\\ \\text{root}\n \\end{aligned}\n \\right.\n \\end{equation*} \n then: \\begin{equation}\n causal\\ plausibility(x^{\\text{cf}}, x) = \\lVert x^{\\text{cf}} - w \\rVert_1.
\n \\end{equation}\n Notice that this is equivalent to computing the $L_1$ norm of the (non-root) residuals of $x^{\\text{cf}}$ with respect to the SCM.\n In other words, this metric measures the distance of the found counterfactual from the profile that satisfies the SCM with zero residuals (except for root nodes).\n\\end{itemize}\n\nMoreover, we compute one more value, designed not to compare a counterfactual explanation $x^{\\text{cf}}$ with its factual counterpart $x$ --- as the ones introduced above --- but rather to directly compare two methods for counterfactual generation. In particular, we are interested in comparing our proposed methodology with the baseline methodology to understand the net impact of our approach on the underlying counterfactual generator engine. \nTo this end, we compute \\mbox{$\\lVert (x^{\\text{cf, base}} - x) - a^\\text{CEILS}\\rVert_1$} --- where $(x^{\\text{cf, base}} - x)$ is the action recommended by the baseline generator and $a^\\text{CEILS}$ is the action proposed by our methodology --- to measure whether the two methods are recommending actions in the same direction. Obviously, this metric can be computed only on valid counterfactual explanations common to both methods.\n\n\n\n\n\n\n\n\n\\subsection{Results}\\label{sec:results}\n\n\n\nTable \\ref{tab:results} summarizes the results obtained in the experiments for all the datasets and metrics. As mentioned, we first compute metrics for the explanations obtained with the two methodologies (Proto and Proto + CEILS) and then for the valid explanations common to both methods (grey rows in Table~\\ref{tab:results}). \n\n\nExcept for validity and feasibility, for each metric we report the median value and the deviation from the median computed over the valid counterfactual explanations found by the corresponding method.
As mentioned, we include the same computation over valid counterfactuals common to both methods (grey rows in the table).\n\nIn what follows, we first describe the results obtained for each dataset, pointing out the main findings, and then we summarize them in section~\\ref{sec:discussion}.\n\n\n\n\\input{tabs\/results}\n\n\\paragraph{Synthetic dataset.} \nFor this dataset we run two different experiments, which differ only in the actionability of $X_2$. As discussed in section~\\ref{sec:datasets}, $X_2$ is the feature with the highest impact on the target $Y$, and it is causally dependent on $X_1$. Therefore, it is interesting to assess our methodology when $X_2$ is mutable but non-actionable: CEILS should be able to learn how to employ $X_1$ in order to impact $X_2$ and thus $Y$; the baseline Proto method, on the other hand, cannot: it either treats $X_2$ as actionable, thereby suggesting the unfeasible action of changing $X_2$ directly, or it treats $X_2$ as non-actionable. In the latter case, however, the baseline Proto method cannot move $X_2$ in any way, hardly resulting in efficient counterfactual generation. \n\nEven if we inspect what actions would have been recommended for the baseline model considering \\emph{ex post} actions (see section~\\ref{sec:action_feature_space}), both strategies ($X_2$ actionable or non-actionable) result in non-feasible suggestions with respect to $X_2$: in the actionable case $X_2$ is changed by the baseline method, but very likely in a way that is not compensated by the change in the parent $X_1$, resulting in an unfeasible net action on $X_2$; in the non-actionable case $X_2$ is kept fixed by design in the baseline method, but a non-null action on $X_2$ is then unavoidable to keep $X_2$ fixed while changing $X_1$, which is unfeasible.
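The mechanism at play can be sketched numerically on the structural equations of the synthetic dataset, with the noise values fixed by hand purely for illustration:

```python
# Synthetic SCM: X1 = U1, X2 = X1 + U2, with Y driven by 3*X2 - X1.
u1, u2 = -1.0, 5.0
x1, x2 = u1, u1 + u2              # factual features: (-1.0, 4.0)

a1 = 0.5                          # feasible soft intervention on X1 only
x1_cf = u1 + a1                   # acted-upon root node
x2_cf = x1_cf + u2                # X2 moves via the causal flow, a2 = 0

score = lambda v1, v2: 3 * v2 - v1   # quantity driving the target Y
print(score(x1, x2), score(x1_cf, x2_cf))  # 13.0 14.0
```

With no action on $X_2$ at all, the score driving $Y$ still rises through the causal flow; a non-causal generator holding $X_2$ fixed would instead need a compensating ex post action of $-0.5$ on $X_2$, which is exactly the unfeasible suggestion discussed above.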
\n\nThese considerations are confirmed by the experimental results in table~\\ref{tab:results} and by the examples shown in table~\\ref{tab:synth_examples}.\n\n\\input{tabs\/synth_examples}\n\nIndeed, the overall results relative to Synthetic \\#1 and \\#2 in table~\\ref{tab:results} show that:\n\\begin{itemize}\n \\item in terms of feature space metrics the Proto method performs slightly better, and this is in line with the fact that CEILS focuses on \\emph{nearest actions} rather than nearest explanations;\n \\item if we compare the CEILS cost with the effort made by the baseline, namely $\\lVert x^\\text{cf} - x\\rVert_1$ (i.e. the distance metric), then the gain of using CEILS becomes evident in both runs;\n \n \\item even if we consider the \\emph{ex post} action for the baseline Proto method, we can see that for Synthetic~\\#2 ($X_2$ non-actionable) there is a huge gain in cost, and this is due to the fact that the Proto method pushes $X_1$ in the ``wrong'' direction, since it employs causality only \\emph{after} computing explanations; \n \n \\item analogously, the feasibility metric confirms our expectations: in the Synthetic~\\#2 experiment the baseline suggests only unfeasible actions;\n \n \\item causal plausibility, much higher for the baseline in Synthetic \\#2, confirms this evidence once more;\n \n \\item in Synthetic~\\#1, instead, cost, feasibility and causal plausibility are all comparable: in this case $X_2$ is actionable, therefore there is no feasibility issue for either method.\n\\end{itemize}\n\nTable~\\ref{tab:synth_examples} displays two examples of counterfactual explanations for the case in which $X_2$ is kept non-actionable.\nIf we focus on the first example, where \\mbox{$Y=0 \\rightarrow Y=1$}, the Prototype method tends to decrease $X_1$ (-0.266) because the model $\\mathcal{C}$ has learnt the negative dependence of $Y$ on $X_1$, and also because it cannot act on $X_2$.
However, CEILS ``knows'' that by decreasing $X_1$ there is a linear decrease in $X_2$, which has a stronger impact on the outcome $Y$. Indeed, in the action column, we see that CEILS effectively suggests to increase $X_1$ (0.103) in order to increase the quantity $3 X_2 - X_1$, and it does so with less overall effort required from the end user.\n\nMoreover, notice that, in line with expectations, if we compute the SCM \\emph{ex post} action for the baseline, then we have non-feasibility for both examples, since there is a non-null action on $X_2$. Also in this case, we see that actions on $X_1$ are taken in the ``wrong'' direction.\n\nSimilar arguments can be made for example 2 in table~\\ref{tab:synth_examples}, where the counterfactual methods try to modify the target as \\mbox{$Y=1 \\rightarrow Y=0$}.\n\n\n\n\n\n\n\n\\paragraph{German credit dataset.}\nAs shown in table~\\ref{tab:results}, the results do not present any significant difference between the two methods. This is not surprising, since the only feasibility constraints are on root nodes (gender kept immutable and age not decreasing).\nAlso in terms of effort there is no apparent discrepancy: the distance obtained with the Proto method and the cost of CEILS are almost identical, i.e. there is no gain given by the causal flow. Also in terms of direction the two methods seem comparable: \\mbox{$\\lVert (x^{\\text{cf, base}} - x) - a^\\text{CEILS}\\rVert_1$} has a median value of 0.16.\nThis may be due to the fact that the causal impact of age on amount and (then) duration is not strong enough to play a significant role.
Similarly, \\emph{ex post} actions have the same overall cost as CEILS actions.\n\nThis experiment shows that the advantage of employing causality on top of standard approaches is not always appreciable, and is highly dependent on the underlying causal structure and on the constraints set over the variables.\n\n\n\\paragraph{Proprietary dataset.}\nFor the proprietary dataset, the behavior of the two methods is not dissimilar from the synthetic case; however, this experiment involves much more complex causal relationships and presents a real-world scenario of credit lending. Here, the role of the feature $X_2$ is played by the feature rating, i.e. a feature extremely important in determining the final outcome, which cannot be controlled directly by the end users and usually is a complex function of other variables.\n\n\nSimilarly to the synthetic dataset, we run two experiments: Proprietary~\\#1, where we consider rating actionable, and Proprietary~\\#2, where rating is mutable but non-actionable\\footnote{Recall that for methods not accounting for causality, non-actionable is effectively equivalent to immutable, since each feature can change only via direct interventions (see section~\\ref{sec:action_feature_space}).}.\nThe results are in line with the discussion made for the synthetic case, thus we here focus only on some interesting insights peculiar to this case:\n\\begin{itemize}\n \\item in Proprietary~\\#2 we see that the baseline Proto method is much less efficient than CEILS in providing valid counterfactuals: this is due to the high importance of rating in determining the target variable and to the fact that the Prototype method cannot change rating indirectly, as CEILS does; \n \\item this also explains the odd discrepancy in terms of costs (8.65 for CEILS vs. 2.51 for the baseline): this is indeed an artefact of the small number of valid counterfactuals over which this metric is computed for the baseline method; CEILS has a higher cost
simply because it is finding counterfactuals also for ``harder'' cases. If we compute the metrics on the common valid explanations (grey rows in the table), the situation is reversed.\n \\item In Proprietary~\\#2 we have $\\lVert (x^\\text{cf, base} -x) - a^\\text{CEILS}\\rVert_1 = 1.87$, confirming the fact that the two methods suggest recommendations in very different directions.\n\\end{itemize}\n\nTable~\\ref{tab:propietary_examples} shows an example of explanations generated by both methods in the Proprietary~\\#2 setting. As expected, gender and citizenship remain fixed, while age and seniority have equal or higher values with respect to the original instance. The baseline method produces a counterfactual explanation with values far away from the factual profile (i.e. it increases the income to 3643.3K and almost doubles the requested amount while decreasing the number of installments). On the other hand, CEILS only suggests increasing the bank seniority and the requested amount\\footnote{Both methods apparently provide the counter-intuitive suggestion of increasing the requested amount: this is due to the fact that the baseline method of \\cite{alibi} searches for explanations as close as possible to the training data distribution. In other words, too small a requested amount would not be plausible with respect to the other suggested features.}. Indeed, increasing the bank seniority results in a better rating, which is enough to reach loan approval. Evidently, an increase in seniority is impossible without a corresponding increase in age: we have treated bank seniority as an actionable feature, but it would have been more appropriate to consider it mutable only as a consequence of age changes, since seniority cannot be controlled independently of age, or to consider an additional common confounder.
Nevertheless, we have decided to keep seniority actionable to focus our discussion on rating and not to limit the baseline method too much (for which it would have been impossible to change seniority as well as rating).\nMoreover, notice that considering the \\emph{ex post} action for the prototype method results in a net \\emph{increase} in rating (i.e. worsening). This confirms what was discussed in section~\\ref{sec:action_feature_space}, i.e. to keep the rating fixed, the non-causal method needs to intervene with a negative action to compensate for the change induced by the suggested modifications of other features.\n\n\nFigure~\\ref{fig:rating_dist2} shows the distribution of interventions on rating for two scenarios: with rating considered either actionable or non-actionable. First, we can notice that the baseline method has non-null \\emph{ex post} actions in both cases, meaning that the non-causal method cannot in any way account for features that are not directly intervened upon but could vary in response to changes in other variables, while CEILS obviously has null actions on rating in the non-actionable (but mutable) case. Secondly, we see that when rating has no constraints (the actionable case) the two distributions are very similar: this underlines once again that our approach --- and in general considering causality in counterfactual explanation generators --- gives results in line with its baseline method when there are no feasibility constraints on important features.\n\n\n\n\n\n\\input{tabs\/proprietary_examples}\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\\begin{figure}[htbp]\n \\centering\n \\includegraphics[width=0.8\\textwidth]{figs\/rating_dist2.pdf}\n \\caption{The distribution of actions on rating for two scenarios: rating considered either actionable or non-actionable (which, for the baseline method, is equivalent to immutable). For the prototype method the \\emph{ex post} actions are reported.
Values are rescaled to a maximum of 1 for the sake of readability.}\n \\label{fig:rating_dist2}\n\\end{figure}\n\n\n\n\\paragraph{Sachs dataset.}\nEighty random out-of-sample instances were used to generate counterfactual explanations. These result in 30 valid counterfactuals (37.5\\%) for the baseline method and 18 (22.5\\%) for Proto + CEILS. This can be explained by the fact that Proto + CEILS satisfies the feasibility constraints (i.e. no action on \\texttt{Raf} and monotonicity conditions on \\texttt{PKA} and \\texttt{Mek}) and compliance with the SCM, while Proto alone ignores them when considering \\emph{ex post} actions. In other words, the constraints on actions and the compliance with the SCM make, in this case, CEILS less effective in terms of valid feature space explanations. \n\nIndeed, the difficulty in providing feasible counterfactuals can be seen in the intersection part. Proximity, sparsity and distance (metrics in feature space) are higher in Proto + CEILS, while sparsity on action, cost and causal plausibility (metrics in latent space) are all worse for the baseline. Note that the Proto + CEILS effort (i.e. cost, 2.50) is slightly lower than both the Proto distance (2.64) and the Proto \\emph{ex post} cost, meaning that there is a gain in the effort to reach the suggested explanation.\n\nInspection of table~\\ref{tab:sachs_examples} clearly reveals that the change in feature space ($\\Delta$) in \\texttt{PKC} and \\texttt{PKA} is comparable in the two methods, but CEILS, forced to satisfy the non-actionability in \\texttt{Raf} and the non-increase in \\texttt{Mek}, has an effective \\emph{decrease} in \\texttt{Raf} and \\texttt{Mek} that the baseline does not.
In other words, in this experiment we see that feasibility may also act as a source of friction in providing valid counterfactuals, as could be expected.\nMoreover, looking at the \\texttt{Mek} (31.2) and \\texttt{Raf} (18.0) values for Proto \\emph{ex post} actions, we have once more confirmation that non-causal methods are not able to provide recommendations that are feasible with respect to a given SCM.\n\nNotice that the baseline method does have the feasibility constraints as well, but they are interpreted as hard interventions --- as discussed in section~\\ref{sec:recourse}. Thus, as we see from the example in table~\\ref{tab:sachs_examples}, Proto keeps \\texttt{Raf} and \\texttt{Mek} fixed, but it can change the other features independently of these constraints, while CEILS cannot, since any change in the variables impacts others as dictated by the SCM. In this sense, CEILS is more constrained than its baseline method.\n\n\\input{tabs\/sachs_example}\n\n\n\n\\subsection{Discussion}\n\\label{sec:discussion}\n\nAfter discussing the results of each experiment separately, we can summarize the overall findings as follows: \n\\begin{itemize}\n \\item CEILS provides, in general, counterfactual explanations that are farther away in feature space than those of the baseline method.\n \\item CEILS is almost always more efficient in providing valid counterfactuals. This is more pronounced when there are non-actionable constraints. In this case, the baseline method may not be able to provide actual counterfactual explanations, or they may be too far away to be considered valid, whereas CEILS can also act indirectly on non-actionable features by taking causal influences into account. \n \\item There are cases (e.g. the Sachs dataset experiment) in which feasibility constraints and SCM compliance may result in a form of friction in finding valid counterfactuals, as could be expected.\n \\item Comparing CEILS and its baseline (non-causal) method in terms of effort to reach the explanation, i.e.
comparing their costs (distance in latent space for CEILS and distance in feature space for Proto) almost always results in a better performance for CEILS, again due to its ability to exploit causal relationships.\n \\item If we take into account the underlying SCM for both approaches, then the baseline method exhibits a very poor performance, and in particular it falls short of suggesting feasible actions to reach valid counterfactuals (as argued in section~\\ref{sec:action_feature_space}).\n\\end{itemize}\n\nThese findings are completely in line with the fact that CEILS effectively focuses on searching for the \\emph{nearest counterfactual in latent space}; it is thus optimized to find the least expensive set of actions with respect to an assumed SCM, guaranteeing a valid recourse. \n\n\n\n\n\n\\section{Conclusions \\label{sec:conclusions}}\n\n\nAgainst the background of a flourishing literature on Explainable AI and in particular on counterfactual explanations, we have proposed a new approach --- Counterfactual Explanations as Interventions in Latent Space (CEILS) --- with a twofold goal, namely to take into account causality in generating counterfactual explanations and to employ them to provide feasible recommendations for recourse, while at the same time being a methodology that is easily adaptable on top of existing counterfactual generator engines. The experimental results clearly show that there are cases in which the baseline generator would recommend explanations that are completely unfeasible with respect to the underlying causal structure, while our approach --- on top of the same generator --- is able to provide more realistic and reachable counterfactual profiles, often with less effort.
\n\nThis is a first attempt toward the ambitious goal of providing the end user with realistic explanations \\emph{and} feasible recommendations to obtain the desired output in automated decision-making processes.\n\nAs for future work, we will tackle some limitations of our methodology and open challenges in the field of counterfactual explanations.\nFirstly, it would be important to relax the assumption of having a complete and reliable causal graph, and allow for the possibility of having a causal-aware generator with an underlying \\emph{partial} DAG (e.g. \\cite{mahajan2019preserving} discuss this point in their proposal). Secondly, it would be valuable to find methods to relax the assumption of having a completely deterministic SCM in the form of an additive noise model~\\eqref{eq:ANM} (e.g. \\cite{karimi2020algorithmic2} take steps in this direction). Another assumption that we should address more properly is that of causal sufficiency, namely the fact that the DAG accounts for all the common causes of the observed variables, which is indeed a strong requirement and virtually impossible to validate. \nMoreover, in our experiments we employ the methodology of~\\cite{alibi} as a baseline generator for its remarkable characteristic of guiding the optimization process towards regions of the feature space that are close in distribution with respect to the observed data: we would like to analyze in detail how this interacts with our approach of applying the counterfactual generator in latent space rather than in feature space.
\n\nFinally, it would be really useful to embed our proposed methodology in user-interaction tools and perform studies both to validate our method and to improve it by taking into account user feedback, possibly allowing the users to change, among other parameters, the feasibility constraints on actions.\n\n\n\n\\section*{Acknowledgements}\nWe want to thank Aisha Naseer (Fujitsu), Greta Greco (Intesa Sanpaolo) and Ilaria Penco (Intesa Sanpaolo) for their support in the joint collaboration that has led to this work.\n\n\n\n\\section*{Disclaimer}\nThe views and opinions expressed within this paper are those of the authors and do not necessarily reflect the official policy or position of Intesa Sanpaolo or Fujitsu. Assumptions made in the analysis, assessments, methodologies, models and results are not reflective of the position of any entity other than the authors.\n\n\n\n\n\\singlespacing\n\n\\bibliographystyle{plainnat} \n\n\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\nLet $A$ be an $n \\times n$ symmetric matrix with eigenvalues $\\lambda_1 \\le \\cdots \\le \\lambda_n$.
We are interested in the eigenproblem $A\\mathbf x = \\lambda \\mathbf x$.\nLet $\\mathbf u \\in \\mathbb R^n$ be an approximate eigenvector to $\\mathbf x$ with unit 2-norm.\nTraditionally, the Rayleigh quotient\n\\begin{equation} \\label{rq}\n\\theta = \\frac{\\mathbf u^T\\!A\\mathbf u}{\\mathbf u^T\\mathbf u} \\qquad (= \\mathbf u^T\\!A\\mathbf u)\n\\end{equation}\nis the standard approach to determine an eigenvalue approximation corresponding to $\\mathbf u$, which yields the corresponding eigenvalue when $\\mathbf u$ is an exact eigenvector.\nThis $\\theta$ satisfies two (related) optimality properties, a Galerkin condition (cf., e.g., \\cite[pp.~13--14]{Par98})\n\\begin{equation} \\label{opt1a}\nA\\mathbf u - \\theta \\mathbf u \\perp \\mathbf u,\n\\end{equation}\nand a minimum residual condition\n\\begin{equation} \\label{opt1b}\n\\theta = \\argmin_{\\gamma} \\, \\|A\\mathbf u-\\gamma \\, \\mathbf u\\|.\n\\end{equation}\nA second well-known quantity is the harmonic Rayleigh quotient, which is, for instance, sometimes considered when one is interested in an eigenvalue near a given target $\\tau \\in \\mathbb R$.\nFirst consider the case $\\tau = 0$; for nonzero $\\mathbf u^T\\!A\\mathbf u$, the harmonic Rayleigh quotient is given by (cf.~\\cite{morgan1991computing})\n\\begin{equation} \\label{hrq}\n\\widetilde \\theta = \\frac{\\mathbf u^T\\!A^2\\,\\mathbf u}{\\mathbf u^T\\!A\\mathbf u}.\n\\end{equation}\nThis quotient satisfies the following two optimality properties: a Galerkin condition\n\\begin{equation} \\label{opt2a}\nA\\mathbf u - \\widetilde \\theta \\mathbf u \\perp A\\mathbf u,\n\\end{equation}\nand a modified minimum residual condition\n\\begin{equation} \\label{opt2b}\n\\widetilde \\theta = \\argmin_{\\gamma} \\, \\|\\gamma^{-1} A\\mathbf u-\\mathbf u\\|.\n\\end{equation}\nCondition \\eqref{opt2b} is easily verified by setting the derivative of $\\|\\gamma^{-1} A\\mathbf u-\\mathbf u\\|$ with respect to $\\gamma$ to zero.\nProperty \\eqref{opt2b} 
is perhaps less widely known.\nBoth Properties~\\eqref{opt1b} and \\eqref{opt2b} have been exploited as secant conditions to determine a stepsize in gradient-type optimization methods; for instance for the well-known Barzilai and Borwein steplengths \\cite{bb1988}. We return to this topic in Section~\\ref{sec:step}; see also V\\\"omel \\cite{vomel2010note}.\n\nThe harmonic Rayleigh quotient for a target $\\tau \\in \\mathbb R$ is (cf., e.g., \\cite[p.~294]{Ste01})\n\\begin{equation} \\label{hrq-tau}\n\\widetilde \\theta_\\tau = \\frac{\\mathbf u^T(A-\\tau I)\\,A\\mathbf u}{\\mathbf u^T(A-\\tau I)\\,\\mathbf u},\n\\end{equation}\nprovided that the numerator is nonzero.\nThis quotient is also equal to the eigenvalue when $\\mathbf u$ is an eigenvector.\nIt satisfies two optimality conditions: a Galerkin condition $(A-\\widetilde \\theta_{\\tau} I)\\, \\mathbf u \\perp (A-\\tau I)\\,\\mathbf u$, and a minimum residual condition (or secant condition) $\\widetilde \\theta_{\\tau} = \\argmin_{\\gamma} \\|(\\gamma-\\tau)^{-1} (A-\\tau I)\\,\\mathbf u-\\mathbf u\\|$ (cf.~\\cite[Prop.~5]{FHK22}). 
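As a quick numerical illustration of the quotients defined above, the following NumPy sketch (the matrix, approximate eigenvector, and target are our own illustrative choices, not from the text) evaluates the standard Rayleigh quotient, the harmonic Rayleigh quotient, and the harmonic Rayleigh quotient with target for a vector close to an eigenvector; all three land close to the corresponding eigenvalue.

```python
import numpy as np

# Illustrative SPD matrix and an approximate eigenvector (our own choices).
A = np.diag([1.0, 2.0, 5.0])
u = np.array([0.02, 0.01, 1.0])
u /= np.linalg.norm(u)            # unit 2-norm, as assumed in the text
Au = A @ u

theta = u @ Au                    # standard Rayleigh quotient
theta_h = (Au @ Au) / (u @ Au)    # harmonic Rayleigh quotient
tau = 4.0                         # target near the eigenvalue 5 (our choice)
M = A - tau * np.eye(3)
theta_tau = (u @ M @ Au) / (u @ M @ u)  # harmonic Rayleigh quotient with target

# All three are close to the eigenvalue 5 approximated by u.
```

For an exact eigenvector all three quotients coincide with the eigenvalue; for this perturbed `u` they differ only in the third decimal.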
\n\nThe standard Rayleigh quotient has three natural properties: it is invariant under nonzero scalings of $\\mathbf u$; it scales naturally with nonzero scalings of $A$; and it shifts naturally with shifts of $A$.\nThe harmonic Rayleigh quotient only satisfies the first property, but modified properties still hold, as follows.\n\n\\begin{proposition} \\label{prop:hrq}\nFor any $\\tau \\in \\mathbb{R}$, the following properties hold:\n\\begin{itemize}\n\\item[(i)] $\\widetilde \\theta_{\\tau}$ is invariant under scaling of $\\mathbf u$: $\\widetilde \\theta_{\\tau}(\\zeta \\, \\mathbf u) = \\widetilde \\theta_{\\tau}(\\mathbf u)$ for $\\zeta \\ne 0$.\n\\item[(ii)] For $\\widetilde \\theta_{\\tau}$ as a function of scalings of $A$ we have\n$\\widetilde \\theta_{\\tau}(\\zeta \\, A) = \\zeta \\, \\widetilde \\theta_{\\tau \\, \\zeta^{-1}}(A)$ for $\\zeta \\ne 0$.\n\\item[(iii)] For $\\widetilde \\theta_{\\tau}$ as a function of shifts of $A$ we have\n$\\widetilde \\theta_{\\tau}(A-\\eta I) = \\widetilde \\theta_{\\tau+\\eta}(A) - \\eta$ for $\\eta \\in \\mathbb R$.\n\\end{itemize}\n\\end{proposition}\n\\begin{proof}\nThis is easy to check from \\eqref{hrq-tau}.\n\\end{proof}\n\nAs an alternative to \\eqref{opt1b} and \\eqref{opt2b}, we propose and study a new {\\em homogeneous} type of Rayleigh quotient in this paper.\nWe work in the projective space $\\mathbb P(\\mathbb R)$, and replace a real number $\\alpha$ by a quotient employing homogeneous coordinates $(\\alpha_1,\\alpha_2)$:\n\\[\n\\alpha = \\frac{\\alpha_1}{\\alpha_2}, \\qquad \\alpha_1^2+\\alpha_2^2 = 1, \\quad \\alpha_2 \\in [0,1].\n\\]\nNote that the restriction $\\alpha_2 \\in [0,1]$ is without loss of generality. The pair $(1,0)$ corresponds to the point at infinity.\nA homogeneous approach may be seen as elegant and appropriate in our context, for three reasons.
\nFirst, it may be seen as the mediator between the two approaches \\eqref{opt1b} and \\eqref{opt2b}; we will come back to this in Section~\\ref{sec:relation}.\nSecond, homogeneous coordinates are useful to deal with infinite values: the harmonic Rayleigh quotient \\eqref{hrq} may be infinite when $\\mathbf u^T\\!A\\mathbf u = 0$. However, this is generally an unwanted situation; we will briefly discuss this matter in Section~\\ref{sec:homo}.\nThird, the homogeneous Rayleigh quotient may be more accurate than its two competitors in some situations. We will give several examples of this in the following sections.\n\nVarious homogeneous techniques for eigenvalue problems have already been developed by various authors.\nStewart and Sun \\cite[p.~283]{SSu90} exploit homogeneous coordinates for the generalized eigenvalue problem $A\\mathbf x = \\lambda B\\,\\mathbf x$, for $B \\in \\mathbb R^{n \\times n}$, since this is ``especially convenient for treating infinite eigenvalues'' \\cite{Ste75}.\nWhile the standard eigenvalue problem $A\\mathbf x = \\lambda \\mathbf x$ does not involve infinite eigenvalues, the harmonic Rayleigh quotient \\eqref{hrq} may be infinite.\nDedieu and Tisseur \\cite{Ded97,DTi03} exploit homogeneous techniques to study perturbation theory for generalized and polynomial eigenproblems.\nInspired by this, a homogeneous Jacobi--Davidson method for subspace expansion has been proposed in \\cite{HNo08}; this may be viewed as an inexact Newton method.\n\nThe rest of the paper is organized as follows. We introduce the new homogeneous Rayleigh quotient in Section~\\ref{sec:homo}, and give a number of key properties. Section~\\ref{sec:sens} studies the sensitivity of the various Rayleigh quotients as function of the approximate eigenvector.\nIn Section~\\ref{sec:gep} we extend the definition of the homogeneous Rayleigh quotient to the generalized eigenvalue problem for the pencil $(A,\\,B)$ with $B$ symmetric positive definite (SPD). 
We exploit the homogeneous Rayleigh quotient in the context of gradient methods for unconstrained optimization in Section~\\ref{sec:step}. Numerical experiments in Section~\\ref{sec:exp} highlight similarities and differences between the various Rayleigh quotients, both in terms of their sensitivity and their performances as stepsizes in gradient methods. Conclusions are drawn in Section~\\ref{sec:con}.\n\n\\section{A homogeneous Rayleigh quotient} \\label{sec:homo}\nFor a given approximate eigenvector $\\mathbf u$, instead of minimizing $\\|A\\mathbf u-\\gamma \\mathbf u\\|$ or $\\|\\mathbf u-\\gamma^{-1}A\\mathbf u\\|$ (see \\eqref{opt1b} and \\eqref{opt2b}), we consider the minimization of the \\emph{homogeneous} residual:\n\\begin{equation} \\label{homo-sec}\n\\min_{(\\alpha_1,\\,\\alpha_2) \\, \\in \\, \\mathbb P} \\ \\|\\alpha_1 \\, \\mathbf u - \\alpha_2 \\, A\\mathbf u\\| = \\min_{\\alpha_1^2+\\alpha_2^2=1} \\ \\|[\\mathbf u \\ \\ - \\!\\!A\\mathbf u] \\smtxa{c}{\\alpha_1 \\\\ \\alpha_2}\\|.\n\\end{equation}\nWhenever well defined, the ratio between the coordinates of the solution $\\alpha = \\alpha_1\/\\alpha_2$ is what we call the \\emph{homogeneous Rayleigh quotient}.\n\n\\subsection{Key properties of the homogeneous Rayleigh quotient} We discuss some properties of the solution to \\eqref{homo-sec}. 
First, let us introduce the $n \\times 2$ matrix\n\\[\nC = [\\mathbf u \\ \\ \\, -\\!\\!A\\mathbf u]\n\\]\nand the associated $2 \\times 2$ matrix\n\\begin{equation} \\label{m22}\nC^T C = \\mtxa{cc}{\\phantom-\\mathbf u^T\\mathbf u & -\\mathbf u^T\\!A\\mathbf u \\\\ -\\mathbf u^T\\!A\\mathbf u & \\phantom-\\mathbf u^T\\!A^2\\mathbf u} = \\mathbf u^T\\!A\\mathbf u \\cdot \\mtxa{cc}{\\phantom-\\theta^{-1} & -1 \\\\ -1 & \\phantom-\\widetilde \\theta},\n\\end{equation}\nwhere the second equality holds when $\\mathbf u^T\\!A\\mathbf u$ is nonzero.\nAlthough we usually impose $\\|\\mathbf u\\| = 1$ in the context of this section, it is not convenient to exploit this simplification in view of more general use in Section~\\ref{sec:step}. \n\nIn the next proposition, if we assume that $\\mathbf u^T\\!A\\mathbf u \\ne 0$, we can provide an explicit formula for the homogeneous Rayleigh quotient $\\alpha$. In addition, we show that $\\alpha$ is an eigenvalue of $A$ when $\\mathbf u$ is an eigenvector, and that it is located between the Rayleigh quotient and the harmonic Rayleigh quotient, depending on the sign of $\\mathbf u^T\\!A\\mathbf u$.\n\n\\begin{proposition} \\label{prop:prop}\nSuppose that $\\mathbf u^T\\!A\\mathbf u \\ne 0$ and denote the smallest eigenvalue of $C^TC$ by $\\mu$. Then the following properties hold.\n\\begin{itemize}\n\\item[(i)] The value $\\mu$ is a simple eigenvalue of $C^TC$, or, equivalently, $\\sqrt{\\mu}$ is a simple singular value of $C$; its corresponding eigenvector $\\smtxa{c}{\\alpha_1 \\\\ \\alpha_2}$ is the unique minimizer of \\eqref{homo-sec}. \n\\item[(ii)] The value of $\\mu$ is\n\\begin{align}\n\\mu = \\tfrac12 \\, \\big[\\mathbf u^T\\mathbf u+\\mathbf u^T\\!A^2\\,\\mathbf u - \\sqrt{\\smash[b]{(\\mathbf u^T\\mathbf u-\\mathbf u^T\\!A^2\\,\\mathbf u)^2+4\\,(\\mathbf u^T\\!A\\mathbf u)^2}} \\ \\big]. 
\\label{mu}\n\\end{align}\n\\item[(iii)] For the homogeneous Rayleigh quotient $\\alpha$ we have\n\\begin{align} \\label{eq:homorq}\n\\alpha &= \\frac{\\alpha_1}{\\alpha_2} = \\frac{\\mathbf u^T\\!A\\mathbf u}{\\mathbf u^T\\mathbf u - \\mu} = \\frac{\\mathbf u^T\\!A^2\\,\\mathbf u - \\mu}{\\mathbf u^T\\!A\\mathbf u}\\nonumber\\\\\n&= \\frac1{2\\ \\mathbf u^T\\!A\\mathbf u} \\ \\big[ \\,\\mathbf u^T\\!A^2\\,\\mathbf u - \\mathbf u^T\\mathbf u + \\sqrt{\\smash[b]{(\\mathbf u^T\\!A^2\\,\\mathbf u - \\mathbf u^T\\mathbf u)^2 + 4\\,(\\mathbf u^T\\!A\\mathbf u)^2}} \\,\\big].\n\\end{align}\nIn terms of the standard and harmonic Rayleigh quotient,\n\\[\n\\alpha = \\tfrac12 \\, \\big[ \\, \\widetilde \\theta-\\theta^{-1} + \\sqrt{\\smash[b]{(\\widetilde \\theta-\\theta^{-1})^2 + 4}} \\, \\big]\n= \\big( \\tfrac12 \\, \\big[ \\, \\theta^{-1}-\\widetilde \\theta + \\sqrt{\\smash[b]{(\\theta^{-1}-\\widetilde \\theta)^2 + 4}} \\, \\big] \\big)^{-1}.\n\\]\n\\item[(iv)] $\\mu=0$ if and only if $\\mathbf u$ is an eigenvector of $A$. 
If $\\mathbf u$ is an eigenvector, then the homogeneous Rayleigh quotient $\\alpha$ is the corresponding eigenvalue.\n\\item[(v)] $0 \\le \\mu < \\min(\\mathbf u^T\\mathbf u, \\ \\mathbf u^T\\!A^2\\mathbf u)$.\n\\item[(vi)] We have the following bounds:\n\\begin{equation*}\n \\begin{cases}\n \\theta \\le \\alpha \\le \\widetilde \\theta &\\text{if}\\quad\\mathbf u^T\\!A\\mathbf u > 0, \\\\[0.5mm]\n \\widetilde \\theta \\le \\alpha \\le \\theta &\\text{if}\\quad\\mathbf u^T\\!A\\mathbf u < 0.\n \\end{cases}\n\\end{equation*}\n\\end{itemize}\n\\end{proposition}\n\n\\begin{proof}\nIt is well known that the minimizer of \\eqref{homo-sec} is the eigenvector corresponding to the smallest eigenvalue of the matrix $C^T C$; or, equivalently, the right singular vector of $C$ corresponding to the smallest singular value.\nWe start from the eigenvalue problem\n\\[\n\\mtxa{cc}{\\phantom-\\mathbf u^T\\mathbf u & -\\mathbf u^T\\!A\\mathbf u \\\\ -\\mathbf u^T\\!A\\mathbf u & \\phantom-\\mathbf u^T\\!A^2\\,\\mathbf u} \\mtxa{c}{\\alpha_1 \\\\ \\alpha_2} = \\mu \\mtxa{c}{\\alpha_1 \\\\ \\alpha_2}.\n\\]\nThe eigenvalue $\\mu$ of \\eqref{m22} is simple because $\\mathbf u^T\\!A\\mathbf u$ is assumed nonzero, so that the discriminant of the quadratic characteristic polynomial is positive. In particular, this implies that the eigenvector $[\\alpha_1, \\ \\alpha_2]^T$ corresponding to the smallest eigenvalue is well defined. The result in part (ii) is an explicit expression for the eigenvalue $\\mu$. Its corresponding eigenvector $[\\alpha_1, \\ \\alpha_2]^T$ is used in item (iii) to give an expression for the ratio $\\alpha = \\alpha_1\/\\alpha_2$, which gives the homogeneous Rayleigh quotient, which can also be written in terms of the standard and the harmonic Rayleigh quotients.\n\nIf $\\mu = 0$, this means that $C$ has rank 1. This happens precisely when $\\mathbf u$ is an eigenvector.
Moreover, if $\\mathbf u$ is an eigenvector corresponding to some eigenvalue $\\lambda$, equations \\eqref{mu} and \\eqref{eq:homorq} give $\\mu = 0$ and $\\alpha = \\lambda$ respectively. This concludes the proof of part (iv).\n\nThe nonnegativity of $\\mu$ directly follows from the fact that $C^TC$ is positive semidefinite. Since $\\mathbf u^T\\!A\\mathbf u \\ne 0$ and, as a consequence, $\\sqrt{\\smash[b]{(\\mathbf u^T\\mathbf u-\\mathbf u^T\\!A^2\\,\\mathbf u)^2+4\\,(\\mathbf u^T\\!A\\mathbf u)^2}} > \\vert \\mathbf u^T\\mathbf u-\\mathbf u^T\\!A^2\\,\\mathbf u \\vert$, it follows that\n\\begin{align*}\n \\mathbf u^T\\mathbf u -\\mu &= \\tfrac12 \\, \\big[\\mathbf u^T\\mathbf u-\\mathbf u^T\\!A^2\\,\\mathbf u + \\sqrt{\\smash[b]{(\\mathbf u^T\\mathbf u-\\mathbf u^T\\!A^2\\,\\mathbf u)^2+4\\,(\\mathbf u^T\\!A\\mathbf u)^2}} \\ \\big] > 0, \\\\\n\\mathbf u^T\\!A^2\\mathbf u -\\mu &= \\tfrac12 \\, \\big[\\mathbf u^T\\!A^2\\,\\mathbf u-\\mathbf u^T\\mathbf u + \\sqrt{\\smash[b]{(\\mathbf u^T\\mathbf u-\\mathbf u^T\\!A^2\\,\\mathbf u)^2+4\\,(\\mathbf u^T\\!A\\mathbf u)^2}} \\ \\big] > 0.\n\\end{align*}\nThus $0\\le \\mu < \\min(\\mathbf u^T\\mathbf u, \\mathbf u^T\\!A^2\\mathbf u)$. In particular, this shows that the sign of $\\alpha$ only depends on the quantity $\\mathbf u^T\\!A\\mathbf u$. 
Given this fact, property (vi) is a direct consequence of part (v).\n\\end{proof}\n\nWe note that in the exceptional case of $\\mathbf u^T\\!A\\mathbf u = 0$, the matrix $C^TC$ is diagonal, with zero Rayleigh quotient and infinite harmonic Rayleigh quotient.\nThe homogeneous Rayleigh quotient $\\alpha = (\\alpha_1, \\alpha_2)$ is either zero (that is, $(0,1)$), or the point at infinity $(1,0)$.\nThe situation $\\mathbf u^T\\!A\\mathbf u = 0$ cannot happen if $A$ is definite, such as for quadratic optimization problems with a symmetric positive definite Hessian; see also Section~\\ref{sec:step}.\nMoreover, $\\mathbf u^T\\!A\\mathbf u = 0$ may not be a very common situation in the context of eigenproblems, since this means that either we are approximating a zero eigenvalue or $\\mathbf u$ is a poor approximate eigenvector.\n\nLet us study some consequences of Proposition~\\ref{prop:prop}. First, it is easy to see (for instance, from \\eqref{eq:homorq}) that the solution to the minimization problem \\eqref{homo-sec} is invariant under nonzero scaling of $\\mathbf u$: as a function of $\\mathbf u$, we have $\\alpha(\\zeta \\, \\mathbf u) = \\alpha(\\mathbf u)$ for $\\zeta \\ne 0$.\nSecond, while the standard and harmonic Rayleigh quotients scale linearly under multiplications $A \\to \\zeta A$ (cf. Proposition~\\ref{prop:hrq}(ii), with $\\tau = 0$), there is no easy relation between $\\alpha(\\zeta A)$ and $\\alpha(A)$. However, in view of Proposition~\\ref{prop:prop}(vi), we establish these bounds:\n\\begin{equation*}\n \\begin{cases}\n \\zeta \\, \\theta(A) \\le \\alpha(\\zeta A) \\le \\zeta \\, \\widetilde \\theta(A) &\\text{if}\\quad\\mathbf u^T\\!A\\mathbf u > 0, \\\\[0.5mm]\n \\zeta \\, \\widetilde \\theta(A) \\le \\alpha(\\zeta A) \\le \\zeta \\, \\theta(A) &\\text{if}\\quad\\mathbf u^T\\!A\\mathbf u < 0.\n \\end{cases}\n\\end{equation*}\nThe same upper and lower bounds hold for $\\zeta\\alpha(A)$.
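As a numerical sanity check on the proposition above (with an illustrative SPD matrix and vector of our own choosing, so that $\mathbf u^T\!A\mathbf u > 0$), the following NumPy sketch computes the homogeneous Rayleigh quotient both as the ratio of the components of the smallest eigenvector of $C^TC$ and via the closed-form expression, and verifies the ordering $\theta \le \alpha \le \widetilde\theta$:

```python
import numpy as np

# Illustrative SPD matrix and approximate eigenvector (our own choices).
A = np.diag([1.0, 2.0, 5.0])
u = np.array([0.1, -0.2, 1.0])
Au = A @ u

# alpha as the component ratio of the eigenvector of C^T C
# for its smallest eigenvalue, with C = [u, -Au].
C = np.column_stack((u, -Au))
_, V = np.linalg.eigh(C.T @ C)     # eigenvalues returned in ascending order
a1, a2 = V[:, 0]                   # eigenvector of the smallest eigenvalue
alpha = a1 / a2                    # homogeneous Rayleigh quotient

# alpha via the closed-form expression (branch for u^T A u > 0).
p, q, r = u @ u, u @ Au, Au @ Au   # u'u, u'Au, u'A^2 u
alpha_cf = ((r - p) + np.hypot(r - p, 2 * q)) / (2 * q)

theta, theta_h = q / p, r / q      # standard and harmonic Rayleigh quotients
```

The ratio $\alpha_1/\alpha_2$ is invariant under the sign ambiguity of the computed eigenvector, so no normalization is needed.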
Therefore, if the interval $[\\zeta\\theta(A),\\, \\zeta\\widetilde \\theta(A)]$ (or $[\\zeta\\widetilde\\theta(A),\\, \\zeta\\theta(A)]$) is small, we conclude that $\\alpha(\\zeta A) \\approx \\zeta \\alpha(A)$. \n\nOn the size of this interval, we note that when $\\theta \\le \\widetilde \\theta$ (which for instance holds for SPD cases), the ``relative size'' of the interval $[\\theta,\\, \\widetilde \\theta]$ can be expressed in terms of the angle between $\\mathbf u$ and $A\\mathbf u$:\n\\[\n\\frac{\\widetilde \\theta-\\theta}{\\theta} = \\frac{1}{\\cos^2(A\\mathbf u, \\mathbf u)}-1 = \\tan^2(A\\mathbf u, \\mathbf u).\n\\]\nThis means that if $\\angle(A\\mathbf u, \\mathbf u)$ is small, the interval is small, and all three Rayleigh quotients will be close, although Example~\\ref{ex:smallex} will show that one can still be more accurate than the other two.\n\nMoreover, we have\n\\[\n\\frac{\\alpha - \\theta}{\\theta} = \\tfrac12 \\, \\big[ \\, 1 + \\tan^2(A\\mathbf u, \\mathbf u)-\\theta^{-2} + \\sqrt{\\smash[b]{(1 + \\tan^2(A\\mathbf u, \\mathbf u)-\\theta^{-2})^2 + 4\\,\\theta^{-2}}} \\, - 2\\big].\n\\]\nFor $\\theta^{-1} \\ll \\tan(A\\mathbf u,\\mathbf u)$ (i.e., for large eigenvalues), this expression is close to $\\tan^2(A\\mathbf u, \\mathbf u)$, which means that $\\alpha \\approx \\widetilde \\theta$.\nIn contrast, if $\\theta^{-1} \\gg \\max(\\tan(A\\mathbf u,\\mathbf u), \\, 1)$ (i.e., for small eigenvalues), one may check that $\\frac{\\alpha-\\theta}{\\theta} = \\mathcal O(\\theta^2) \\approx 0$, which implies that $\\alpha \\approx \\theta$.\nSimilar remarks can be made when $\\widetilde \\theta < \\theta$.\n\nSimple manipulations of Proposition~\\ref{prop:prop}(vi) also lead to some relations between the relative errors of the three Rayleigh quotients, when the estimated eigenvalues are either $\\lambda_1$ or $\\lambda_n$.
If $A$ is SPD, then both the standard and harmonic Rayleigh quotients lie in the spectrum of $A$, i.e., $\\theta,\\,\\widetilde\\theta \\in [\\lambda_1,\\lambda_n]$. Thus the following inequalities hold:\n\\begin{equation}\n\\label{eq:sens-spd}\n \\frac{\\theta - \\lambda_1}{\\lambda_1} \\le \\frac{\\alpha - \\lambda_1}{\\lambda_1} \\le \\frac{\\widetilde\\theta - \\lambda_1}{\\lambda_1}, \\qquad \\frac{\\lambda_n -\\widetilde\\theta}{\\lambda_n} \\le \\frac{\\lambda_n -\\alpha}{\\lambda_n} \\le \\frac{\\lambda_n -\\theta}{\\lambda_n}.\n\\end{equation}\nWe conclude that, in the SPD case, the Rayleigh quotient is more accurate when we estimate the smallest eigenvalue of $A$, while the harmonic Rayleigh quotient is more accurate for the largest eigenvalue. The homogeneous Rayleigh quotient lies in between.\n\nFor indefinite matrices, while $\\theta\\in [\\lambda_1,\\lambda_n]$, the harmonic and the homogeneous Rayleigh quotients can lie outside the spectrum. Example~\\ref{ex:smallex} shows that the harmonic and homogeneous Rayleigh quotients may overestimate the largest eigenvalue, and the sensitivity of the homogeneous Rayleigh quotient may be smaller than that of the other two. In particular, \\eqref{eq:sens-spd} does not hold if $A$ is not SPD.\n\n\\begin{example}\n\\label{ex:smallex}\n\\rm Let $A = \\text{diag}(-\\frac23, \\ \\frac13, \\ 2)$, and consider the eigenvector $\\mathbf x = [0, \\, 0, \\, 1]^T$ corresponding to $\\lambda_3 = 2$.
Given $\\mathbf u = [-0.02, \\ 0.01, \\ 1]^T$ as an approximation to $\\mathbf x$, with $\\angle(\\mathbf u, \\mathbf x) \\approx 2.2 \\cdot 10^{-2}$, we have that\n\\[\n|\\theta(\\mathbf u) - \\lambda_3| \\approx 1.2 \\cdot 10^{-3}, \\qquad\n|\\widetilde \\theta(\\mathbf u) - \\lambda_3| \\approx 3.3 \\cdot 10^{-4}, \\qquad\n|\\alpha(\\mathbf u) - \\lambda_3| \\approx 1.6 \\cdot 10^{-5}.\n\\]\nThe quotients are $\\theta(\\mathbf u) \\approx 1.9988 < \\lambda_3$, $\\widetilde \\theta(\\mathbf u) \\approx 2.0003 > \\lambda_3$ and $\\alpha(\\mathbf u) \\approx 2.00002 > \\lambda_3$, so the harmonic and the homogeneous Rayleigh quotients overestimate $\\lambda_3$. Nevertheless, the homogeneous Rayleigh quotient is an order of magnitude more accurate than the harmonic Rayleigh quotient, and two orders more accurate than the standard Rayleigh quotient.\n\\end{example}\n\nWhen the wanted eigenvalue is in the interior of the spectrum, or if $A$ is indefinite, it is more difficult to derive general conclusions about the ordering of the relative errors. We will discuss this in more detail in Section~\\ref{sec:sens}.\n\n\\subsection{A Galerkin condition for the homogeneous Rayleigh quotient}\nIn view of \\eqref{opt1a} and \\eqref{opt2a}, the question arises whether there exists a Galerkin (orthogonality) condition based on the span of $\\mathbf u$ and $A\\mathbf u$ for the homogeneous approach. Let us first express the homogeneous Rayleigh quotient as a solution to a quadratic equation. As a secondary result, this property connects the homogeneous Rayleigh quotient and the harmonic Rayleigh quotient with target. \n\\begin{proposition}\\label{prop:quadeq}\nLet $\\mathbf u^T\\!A\\mathbf u \\ne 0$.
The homogeneous Rayleigh quotient \\eqref{eq:homorq} solves\n\\begin{equation}\n\\label{eq:homopoly}\n (\\mathbf u^T\\!A\\mathbf u) \\, \\alpha^2 + (\\mathbf u^T\\mathbf u-\\mathbf u^T\\!A^2\\mathbf u) \\, \\alpha - (\\mathbf u^T\\!A\\mathbf u) = 0\n\\end{equation}\nand satisfies $\\alpha\\ \\mathbf u^T\\!A\\mathbf u > 0$. In addition, the homogeneous Rayleigh quotient is a harmonic Rayleigh quotient with target \\eqref{hrq-tau}, where $\\tau = -\\alpha^{-1}$.\n\\end{proposition}\n\\begin{proof}\nIt is straightforward to check that \\eqref{eq:homorq} is a solution to \\eqref{eq:homopoly}. The solutions to \\eqref{eq:homopoly} have opposite signs, since their product is $-1$. Given that $0\\le \\mu < \\min(\\mathbf u^T\\mathbf u, \\mathbf u^T\\!A^2\\mathbf u)$ from Proposition~\\ref{prop:prop}, the homogeneous Rayleigh quotient shares the same sign as $\\mathbf u^T\\!A\\mathbf u$, thus it corresponds to the solution to \\eqref{eq:homopoly} for which $\\alpha\\ \\mathbf u^T\\!A\\mathbf u > 0$.\n\nThe second part follows immediately from the fact that imposing $\\widetilde\\theta_{-\\alpha^{-1}} = \\alpha$ is equivalent to solving \\eqref{eq:homopoly}.\n\\end{proof}\nAnother equivalent point of view on this proposition is the following.\nEquation \\eqref{eq:homopoly} can also be expressed as\n\\begin{equation}\n\\label{eq:homogal}\n \\mathbf u^T(A + \\alpha^{-1} I)(A-\\alpha I)\\,\\mathbf u = 0.\n\\end{equation}\nTherefore, we have the Galerkin condition $(A-\\alpha I)\\,\\mathbf u \\perp (A+\\alpha^{-1} I)\\, \\mathbf u$, or, in homogeneous coordinates, \n\\[\n(\\alpha_2 A-\\alpha_1 I) \\, \\mathbf u \\perp (\\alpha_1 A+ \\alpha_2 I) \\, \\mathbf u.\n\\]\nThis again highlights that $\\alpha$ can be viewed as a harmonic Rayleigh quotient with $-\\alpha^{-1}$ as target.\n\n\\subsection{Connection with standard and harmonic minimal residual conditions}\n\\label{sec:relation}\nThere is a connection between the two minimal residual conditions of the standard and harmonic 
Rayleigh quotients (\\eqref{opt1b} and \\eqref{opt2b}) and the quadratic equation \\eqref{eq:homopoly}. First notice that another formulation of \\eqref{eq:homopoly} is\n\\begin{equation} \\label{aainv}\n(\\mathbf u^T\\!A\\mathbf u) \\, \\alpha + (\\mathbf u^T\\mathbf u - \\mathbf u^T\\!A^2\\mathbf u) - (\\mathbf u^T\\!A\\mathbf u) \\, \\alpha^{-1} = 0,\n\\end{equation}\nfor $\\alpha \\ne 0$ (which corresponds to $\\mathbf u^T\\!A\\mathbf u \\ne 0$). The stationarity conditions for \\eqref{opt1b} and \\eqref{opt2b}, respectively, can be stated as\n\\[\n\\mathbf u^T\\mathbf u - (\\mathbf u^T\\!A\\mathbf u) \\, \\gamma^{-1} = 0, \\qquad (\\mathbf u^T\\!A\\mathbf u) \\, \\gamma - \\mathbf u^T\\!A^2\\mathbf u = 0.\n\\]\nThis means that \\eqref{aainv} is a linear combination of the two stationarity conditions of the standard and the harmonic Rayleigh quotients. This fact, together with the bounds derived in Proposition~\\ref{prop:prop}(vi), lets us interpret the homogeneous Rayleigh quotient as a ``mediator'' between the standard and the harmonic Rayleigh quotient; the experiments in Section~\\ref{sec:exp} will also highlight this mediating behavior. It is an open question whether it is possible to relate the objective function \\eqref{homo-sec} to a combination of the two residuals associated with the standard and the harmonic Rayleigh quotients.\n\n\\section{Sensitivity analysis} \\label{sec:sens}\nWe now study the sensitivity of the various Rayleigh quotients with respect to perturbations in the approximate eigenvector $\\mathbf u$.\nNote that this sensitivity is different from that expressed by the condition number $\\kappa(\\lambda)$ of an eigenvalue, which is related to its perturbation as a function of changes in the matrix $A$. 
We note that for a simple eigenvalue $\\lambda$ of a symmetric $A$, it holds that $\\kappa(\\lambda) = 1$; this means that the eigenvalue is perfectly conditioned (see, e.g., \\cite[p.~16]{Par98}).\n\nStudying the sensitivity with respect to the approximate eigenvector for the standard and harmonic Rayleigh quotient is certainly not new (cf., e.g., \\cite{sleijpen2003use,sleijpen2003optimal} and the references therein), but we derive a new result for the homogeneous Rayleigh quotient, and obtain expressions that enable an easy comparison of the three Rayleigh quotients. We will also comment on the differences between existing results and ours.\n\n\\subsection{Rayleigh quotient and harmonic Rayleigh quotient}\nAs an ansatz, suppose $\\mathbf u=\\mathbf x+\\mathbf e$ is an approximate eigenvector, where $\\|\\mathbf x\\| = 1$, $\\mathbf e \\perp \\mathbf x$, and $\\varepsilon := \\|\\mathbf e\\|$ is small.\nThen the perturbation of the standard Rayleigh quotient is\n\\begin{equation} \\label{rq-sens}\n\\theta(\\mathbf u) = \\frac{\\mathbf u^T\\!A\\mathbf u}{\\mathbf u^T\\mathbf u} = \\frac{\\lambda + \\mathbf e^T\\!A\\mathbf e}{1+\\mathbf e^T\\mathbf e}\n= \\lambda \\, (1 + \\mathbf e^T (\\lambda^{-1} A-I)\\,\\mathbf e) + \\mathcal O(\\varepsilon^4),\n\\end{equation}\nwhere we use the fact that $(1+t)^{-1} = 1 - t + \\mathcal O(t^2)$ for $t \\to 0$.\nFor the harmonic Rayleigh quotient we have\n\\begin{align}\n\\widetilde \\theta(\\mathbf u) = \\frac{\\mathbf u^T\\!A^2\\,\\mathbf u}{\\mathbf u^T\\!A\\mathbf u} & = \\frac{\\lambda^2+\\mathbf e^T\\!A^2\\mathbf e}{\\lambda+\\mathbf e^T\\!A\\mathbf e}\n= \\lambda \\, (1+\\lambda^{-2} \\, \\mathbf e^T\\!A^2\\mathbf e) \\, (1-\\lambda^{-1} \\, \\mathbf e^T\\!A\\mathbf e) + \\mathcal O(\\varepsilon^4) \\nonumber \\\\[1mm]\n& = \\lambda \\, (1 + \\mathbf e^T \\lambda^{-1}A \\, (\\lambda^{-1} A-I)\\,\\mathbf e) + \\mathcal O(\\varepsilon^4). 
\\label{hrq-sens}\n\\end{align}\nFor the harmonic Rayleigh quotient with target $\\tau$ the expression becomes\n\\begin{equation} \\label{hrq-tau-sens}\n\\widetilde \\theta_\\tau(\\mathbf u) = \\lambda \\, (1 + \\mathbf e^T \\, (\\lambda-\\tau)^{-1} (A-\\tau I) \\, (\\lambda^{-1} A-I)\\,\\mathbf e) + \\mathcal O(\\varepsilon^4).\n\\end{equation}\nIn particular, we note that $\\theta(\\mathbf x) = \\widetilde \\theta(\\mathbf x) = \\widetilde \\theta_\\tau(\\mathbf x) = \\lambda$. We also point out that we have studied similar expressions for {\\em inverse} Rayleigh quotients (as stepsizes for gradient methods) in \\cite{FHK22}.\n\nTo derive the lower and upper bounds on the sensitivity of the approximate eigenvalues, we make use of this standard result for symmetric operators.\n\n\\begin{lemma}\n\\label{lemma}\nSuppose $\\mathbf u = \\mathbf x + \\mathbf e$ is an approximate eigenvector corresponding to a simple eigenvalue $\\lambda$, with $\\|\\mathbf x\\|=1$, $\\mathbf e \\perp \\mathbf x$, and $\\varepsilon = \\|\\mathbf e\\|$, of a symmetric $A$. Let $p$ be a polynomial. Then\n\\[\n\\varepsilon^2\\,\\min_{\\lambda_i \\ne \\lambda} \\ |p(\\lambda_i)| \\ \\le \\ |\\mathbf e^T \\!p(A) \\, \\mathbf e| \\ \\le \\ \\varepsilon^2 \\,\\max_{\\lambda_i \\ne \\lambda} \\ |p(\\lambda_i)|.\n\\]\n\\end{lemma}\n\\begin{proof}\nSince $A$ is symmetric, it is diagonalizable by an orthogonal transformation, and we may assume that $A = \\text{diag}(\\lambda_1, \\dots, \\lambda_n)$.\nWith the notation $\\lambda = \\lambda_j$, it follows that $|\\mathbf e^T\\!p(A)\\mathbf e| = |\\sum_{i\\ne j} e_i^2\\, p(\\lambda_i)|$, from which the result follows easily. 
\n\\end{proof}\nThe above expressions imply the following lower and upper bounds on the sensitivity of the approximate eigenvalues as function of the approximate eigenvector.\nThe assumption $\\lambda \\ne 0$ in the next proposition may be viewed as non-restrictive: for zero eigenvalues we can consider the absolute error $|\\theta(\\mathbf u)-\\lambda|$ instead.\n\n\\medskip\n\\begin{proposition} \\label{prop:1}\nSuppose $\\mathbf u = \\mathbf x + \\mathbf e$ is an approximate eigenvector corresponding to a simple eigenvalue $\\lambda \\ne 0$, with $\\|\\mathbf x\\|=1$, $\\mathbf e \\perp \\mathbf x$, and $\\varepsilon = \\|\\mathbf e\\|$, of a symmetric $A$. Then, up to $\\mathcal O(\\varepsilon^4)$-terms, for the sensitivity of the Rayleigh quotient (as function of $\\mathbf u$) it holds that\n\\begin{equation} \\label{eq:brq}\n \\min_{\\lambda_i \\ne \\lambda} \\frac{|\\lambda_i-\\lambda|}{|\\lambda|} \\ \\varepsilon^2 \\ \\lesssim \\\n\\frac{|\\theta(\\mathbf u)-\\lambda|}{|\\lambda|}\n\\ \\lesssim \\ \\max_{\\lambda_i \\ne \\lambda} \\frac{|\\lambda_i-\\lambda|}{|\\lambda|} \\ \\varepsilon^2.\n\\end{equation}\nIf $\\tau$ is not equal to an eigenvalue, then for the general harmonic Rayleigh quotient $\\widetilde \\theta_\\tau$:\n\\begin{equation}\n \\label{eq:bhrq}\n \\min_{\\lambda_i \\ne \\lambda} \\frac{\\vert(\\lambda_i-\\tau)(\\lambda_i-\\lambda)\\vert}{\\vert\\lambda(\\lambda-\\tau)\\vert}\\, \\varepsilon^2 \\ \\lesssim \\\n\\frac{|\\widetilde \\theta_\\tau(\\mathbf u)-\\lambda|}{|\\lambda|}\n\\ \\lesssim \\ \\max_{\\lambda_i \\ne \\lambda} \\frac{\\vert(\\lambda_i-\\tau)(\\lambda_i-\\lambda)\\vert}{\\vert\\lambda(\\lambda-\\tau)\\vert}\\, \\varepsilon^2.\n\\end{equation}\nIn particular, for the sensitivity of the harmonic Rayleigh quotient (i.e., $\\tau=0$) we have \n\\[\n\\min_{\\lambda_i \\ne \\lambda} \\frac{\\vert\\lambda_i(\\lambda_i-\\lambda)\\vert}{\\lambda^2}\\, \\varepsilon^2\\ \\lesssim \\\n\\frac{|\\widetilde \\theta(\\mathbf 
u)-\\lambda|}{|\\lambda|}\n\\ \\lesssim \\ \\max_{\\lambda_i \\ne \\lambda} \\frac{\\vert\\lambda_i(\\lambda_i-\\lambda)\\vert}{\\lambda^2}\\, \\varepsilon^2.\n\\]\n\\end{proposition}\n\\begin{proof}\nThis follows from \\eqref{rq-sens}, \\eqref{hrq-sens}, and \\eqref{hrq-tau-sens}, using Lemma~\\ref{lemma} with $p(t) = t-\\lambda$ for \\eqref{rq-sens}, $p(t) = t\\,(t-\\lambda)$ for \\eqref{hrq-sens}, and $p(t) = (t-\\tau)\\,(t-\\lambda)$ for \\eqref{hrq-tau-sens}. The bound for the harmonic Rayleigh quotient is the special case $\\tau = 0$ of \\eqref{eq:bhrq}.\n\\end{proof}\n\nWe mention that the bound \\eqref{eq:brq} has a slightly improved version, as follows (cf., e.g., \\cite[Thm.~2.1]{sleijpen2003optimal} for the smallest eigenvalue).\nFor this context we introduce $\\widetilde{\\mathbf e} = \\frac{\\mathbf e}{\\|\\mathbf e\\|}$ of unit length, $\\mathbf u = \\mathbf x + \\mathbf e$ as before, and $\\widetilde{\\mathbf u} = \\frac{\\mathbf u}{\\|\\mathbf u\\|}$, where $\\|\\mathbf u\\|^2 = 1+\\varepsilon^2$.\nWe can decompose $\\widetilde{\\mathbf u} = \\cos(\\widetilde{\\mathbf u},\\mathbf x) \\, \\mathbf x + \\sin(\\widetilde{\\mathbf u},\\mathbf x) \\, \\widetilde{\\mathbf e}$. Then an easy computation gives $\\widetilde{\\mathbf u}^T\\!A\\widetilde{\\mathbf u}-\\lambda = \\sin^2(\\widetilde{\\mathbf u},\\mathbf x) \\ \\widetilde{\\mathbf e}^T\\!(A-\\lambda I)\\,\\widetilde{\\mathbf e}$ and therefore \n\\begin{equation} \\label{sharp-sin2}\n|\\widetilde{\\mathbf u}^T\\!A\\widetilde{\\mathbf u}-\\lambda| \\le \\max_{\\lambda_i \\ne \\lambda} |\\lambda_i-\\lambda| \\, \\sin^2(\\widetilde{\\mathbf u},\\mathbf x).\n\\end{equation}\nTo connect the approximate bound \\eqref{eq:brq} to \\eqref{sharp-sin2}, we note that $\\sin^2(\\widetilde{\\mathbf u},\\mathbf x) = \\frac{\\varepsilon^2}{1+\\varepsilon^2}$, which is asymptotically equal to $\\varepsilon^2$. 
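The identity underlying \\eqref{sharp-sin2} is easy to verify numerically; the following sketch does so for a random symmetric matrix (the size, seed, and perturbation norm are chosen only for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

# Random symmetric matrix and one of its (simple) eigenpairs.
M = rng.standard_normal((6, 6))
A = (M + M.T) / 2
eigvals, V = np.linalg.eigh(A)
lam, x = eigvals[2], V[:, 2]

# Small perturbation e orthogonal to x; e_tilde is its normalization.
e = rng.standard_normal(6)
e -= (e @ x) * x                     # enforce e orthogonal to x
e *= 1e-3 / np.linalg.norm(e)
e_tilde = e / np.linalg.norm(e)

u = x + e
u_tilde = u / np.linalg.norm(u)
eps2 = np.linalg.norm(e) ** 2
sin2 = eps2 / (1 + eps2)             # sin^2(u_tilde, x)

lhs = u_tilde @ A @ u_tilde - lam
rhs = sin2 * (e_tilde @ (A - lam * np.eye(6)) @ e_tilde)
assert abs(lhs - rhs) < 1e-12        # the identity holds exactly
```

Note that the identity is exact, not a first-order approximation, which is what makes \\eqref{sharp-sin2} sharp.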
The bound \\eqref{sharp-sin2} is exact (i.e., not a first-order bound as in Proposition~\\ref{prop:1}) and sharp, but it is asymptotically equal to our expression.\n\nAn exact bound for the error of the harmonic Rayleigh quotient with target is provided in \\cite[Thm.~5.2]{sleijpen2003use}. Under certain assumptions mentioned in \\cite{sleijpen2003use} for the target $\\tau$ and $\\varepsilon$, with $\\mathbf u = \\mathbf x + \\varepsilon\\,\\widetilde{\\mathbf e}$, it is shown that\n\\[\n\\frac{\\mathbf u^TA(A-\\tau I)\\,\\mathbf u}{\\mathbf u^T(A-\\tau I)\\,\\mathbf u} - \\lambda = \\varepsilon^2\\frac{\\widetilde{\\mathbf e}^T(A-\\lambda I)(A - \\tau I)\\,\\widetilde{\\mathbf e}}{(\\lambda - \\tau) + \\varepsilon^2 \\, (\\widetilde{\\mathbf e}^TA\\widetilde{\\mathbf e} - \\tau)} \\le \\varepsilon^2 \\, \\max_{\\lambda_i \\ne \\lambda}\\frac{(\\lambda_i-\\lambda)(\\lambda_i - \\tau)}{(\\lambda - \\tau) + \\varepsilon^2(\\lambda_i - \\tau)}.\n\\]\nSince we do not make any assumptions on the target or the value of $\\|\\mathbf e\\|$, our upper bound \\eqref{eq:bhrq} includes absolute values. Discarding the $\\varepsilon^2$-term in the denominator yields an approximate upper bound accurate to $\\mathcal O(\\varepsilon^4)$-terms. In conclusion, \\eqref{eq:bhrq} is less sharp in some situations but more general, and asymptotically the same as the result of \\cite{sleijpen2003use}. Another good reason to consider \\eqref{eq:brq} and \\eqref{eq:bhrq} is that approximate bounds for the error of the homogeneous Rayleigh quotient can be obtained via the first-order approximation of the quotient, as we have done in Proposition~\\ref{prop:1}. 
At the end of this section, we will also point out that the three approximate bounds have the same form.\n\n\\subsection{Homogeneous Rayleigh quotient} \n\\label{sec:homobounds}\nWe now determine approximate bounds for the sensitivity of the homogeneous Rayleigh quotient.\n\\begin{proposition}\nSuppose $\\mathbf u = \\mathbf x + \\mathbf e$ is an approximate eigenvector corresponding to a simple eigenvalue $\\lambda \\ne 0$, with $\\|\\mathbf x\\|=1$, $\\mathbf e \\perp \\mathbf x$, and $\\varepsilon = \\|\\mathbf e\\|$, of a symmetric $A$. Then, up to $\\mathcal O(\\varepsilon^4)$-terms, for the sensitivity of the homogeneous Rayleigh quotient (as function of $\\mathbf u$) it holds that:\n\\[\n\\varepsilon^2 \\, \\frac{1}{\\lambda^2+1} \\, \\min_{\\lambda_i \\ne \\lambda} \\vert(\\lambda_i-\\lambda)(\\lambda_i+\\lambda^{-1})\\vert \\ \\lesssim \\ \\frac{|\\alpha(\\mathbf u)-\\lambda|}{|\\lambda|}\n\\ \\lesssim \\ \\varepsilon^2 \\, \\frac{1}{\\lambda^2+1} \\, \\max_{\\lambda_i \\ne \\lambda} \\vert(\\lambda_i-\\lambda)(\\lambda_i+\\lambda^{-1})\\vert.\n\\]\n\\end{proposition}\n\\begin{proof}\nFrom Proposition~\\ref{prop:prop}(iii), the homogeneous Rayleigh quotient can be written as\n\\[\n\\alpha = \\sqrt{\\aux^2+1} - \\aux, \\qquad \\aux = \\frac{1}{2\\ \\mathbf u^T\\!A\\mathbf u} \\, (\\mathbf u^T\\mathbf u-\\mathbf u^T\\!A^2\\mathbf u).\n\\]\nWe can express $\\aux$ in terms of $\\lambda$ and $\\mathbf e$ as\n\\begin{equation*}\n\\aux = \\frac{1-\\lambda^2 + \\|\\mathbf e\\|^2 - \\|A\\mathbf e\\|^2}{2\\,(\\lambda + \\mathbf e^TA\\mathbf e)}.\n\\end{equation*}\nFurthermore, when $\\|\\mathbf e\\| = \\varepsilon \\to 0$, \n\\[\n\\aux = \\tfrac12 \\, \\lambda^{-1} \\, (1 - \\lambda^{-1} \\, \\mathbf e^T\\!A\\mathbf e)\\,(1 - \\lambda^2 + \\|\\mathbf e\\|^2 -\\|A\\mathbf e\\|^2 )+ \\mathcal O(\\varepsilon^4).\n\\]\nFor the square root, we use the first-order approximation $\\sqrt{1+t} = 1 + \\frac12 t + \\mathcal O(t^2)$. 
Discarding $\\mathcal O(\\varepsilon^4)$-terms yields\n\\begin{align*}\n\\sqrt{\\aux^2+1} &= \\tfrac12 \\, \\lambda^{-1} \\, (1 + \\lambda^{-1} \\, \\mathbf e^T\\!A\\mathbf e)^{-1} \\sqrt{(1-\\lambda^2 + \\|\\mathbf e\\|^2 - \\|A\\mathbf e\\|^2)^2 + 4\\,(\\lambda + \\mathbf e^TA\\mathbf e)^2} \\\\\n&= \\tfrac12 \\lambda^{-1} (1 - \\lambda^{-1} \\mathbf e^TA\\mathbf e) \\Big(1 + \\lambda^2 + \\frac{1-\\lambda^2}{1+\\lambda^2}(\\|\\mathbf e\\|^2 -\\|A\\mathbf e\\|^2) + \\frac{4\\lambda}{1+\\lambda^2}\\,\\mathbf e^T\\!A\\mathbf e \\Big).\n\\end{align*}\nSimple computations show that\n\\begin{align}\n\\label{eq:homoapprox}\n\\alpha & = (1 - \\lambda^{-1} \\mathbf e^TA\\mathbf e) \\Big(\\lambda -\\frac{\\lambda}{\\lambda^2+1}(\\|\\mathbf e\\|^2 -\\|A\\mathbf e\\|^2) +\\frac{2}{\\lambda^2+1}\\mathbf e^T\\!A\\mathbf e \\Big) + \\mathcal O(\\varepsilon^4)\\nonumber\\\\\n& = \\lambda +\\frac{\\lambda}{\\lambda^2+1} \\, \\mathbf e^T(A + \\lambda^{-1}I)(A-\\lambda I) \\, \\mathbf e + \\mathcal O(\\varepsilon^4).\n\\end{align}\nThe claim follows from Lemma~\\ref{lemma}, with the polynomial $p(t) = (t + \\lambda^{-1})(t - \\lambda)$.\n\nAn alternative method to derive this result is via the Implicit Function Theorem. From \\eqref{eq:homogal}, for the pair $(\\alpha(\\mathbf u), \\mathbf u)$, it holds that $F(\\alpha(\\mathbf u), \\mathbf u) := \\mathbf u^T (A-\\alpha I)(A+\\alpha^{-1}I) \\, \\mathbf u = 0$. 
Then we know that\n\\[\n\\nabla \\alpha(\\mathbf u) = -\\Big( \\frac{\\partial F}{\\partial \\alpha} \\Big)^{-1} \\, \\frac{\\partial F}{\\partial \\mathbf u}\n= 2 \\ [\\mathbf u^T(A+\\alpha^{-2}A)\\,\\mathbf u]^{-1} \\ (A-\\alpha I)(A+\\alpha^{-1} I)\\, \\mathbf u.\n\\]\nSince $\\nabla\\alpha(\\mathbf x) = \\mathbf 0$, we consider the second-order approximation of the homogeneous Rayleigh quotient $\\alpha(\\mathbf x+\\mathbf e) \\approx \\alpha(\\mathbf x) + \\frac12 \\, \\mathbf e^T \\nabla^2 \\alpha(\\mathbf x) \\, \\mathbf e$.\nThe action of $\\nabla^2 \\alpha(\\mathbf x)$ can be obtained from the first-order approximation of the gradient in $\\mathbf x$:\n\\begin{align*}\n\\nabla^2\\alpha(\\mathbf x)\\,\\mathbf e & = \\nabla \\alpha(\\mathbf x+\\mathbf e) - \\nabla\\alpha(\\mathbf x) + \\mathcal O(\\varepsilon^2) \\\\\n& = \\frac{2 \\, \\lambda}{1+\\lambda^2} \\ (A-\\lambda I)(A+\\lambda^{-1} I)\\,\\mathbf e + \\mathcal O(\\varepsilon^2).\n\\end{align*}\nThis leads to the expression in \\eqref{eq:homoapprox}.\n\\end{proof}\n\nInterestingly, by replacing $\\tau = -\\lambda^{-1}$ in the approximation of the harmonic Rayleigh quotient with target \\eqref{hrq-tau-sens}, we get the approximation \\eqref{eq:homoapprox} for the homogeneous Rayleigh quotient.\n\nWe now compare the different first-order bounds. With a little abuse of notation, let $\\theta(\\mathbf u)$ be any of the three Rayleigh quotients in $\\mathbf u$. 
Then the previous results can be summarized by \n\\begin{equation*}\n \\frac{\\varepsilon^2}{\\vert\\lambda\\vert}\\ \\min_{\\lambda_i \\ne \\lambda} |p_\\lambda(\\lambda_i)| \\ \\ \\lesssim \\ \\\n\\frac{|\\theta(\\mathbf u)-\\lambda|}{|\\lambda|}\n\\ \\ \\lesssim \\ \\ \\frac{\\varepsilon^2}{\\vert\\lambda\\vert}\\ \\max_{\\lambda_i \\ne \\lambda} |p_\\lambda(\\lambda_i)|, \n\\end{equation*}\nwhere\n\\begin{equation}\n\\label{eq:g}\n p_\\lambda(t) = \n \\begin{cases}\n t-\\lambda & \\text{for the Rayleigh quotient,} \\\\\n \\frac{1}{\\lambda} t(t-\\lambda) & \\text{for the harmonic Rayleigh quotient,} \\\\\n \\frac{1}{\\lambda^2+1} (t-\\lambda)(\\lambda t + 1) & \\text{for the homogeneous Rayleigh quotient.}\n \\end{cases}\n\\end{equation}\nFor the Rayleigh quotient, it is easy to see that $\\max_{\\lambda_i \\ne \\lambda}\\vert\\lambda_i - \\lambda\\vert \\in \\{\\vert\\lambda - \\lambda_1\\vert, \\vert\\lambda - \\lambda_n\\vert\\}$. In contrast, the upper bounds for the harmonic Rayleigh quotient or the homogeneous Rayleigh quotient are more complicated. Viewing $p_\\lambda$ as a continuous function on $[\\lambda_1, \\lambda_n]$, the maximum of the corresponding two upper bounds may be attained at the vertex of the parabola $p_\\lambda$, or at the boundary points $\\lambda_1$ and $\\lambda_n$. Thus, in the discrete case, we know that\n\\begin{equation}\n\\label{eq:maxmax}\n \\max_{\\lambda_i \\ne \\lambda} |p_\\lambda(\\lambda_i)| \\le \\max_{\\lambda_i \\in \\{\\lambda_1, \\lambda_n, \\widetilde \\lambda\\}} |p_\\lambda(\\lambda_i)|,\n\\end{equation}\nwhere $\\widetilde\\lambda = \\frac12 \\lambda$ for the harmonic Rayleigh quotient, while $\\widetilde \\lambda = \\frac12 (\\lambda - \\lambda^{-1})$ for the homogeneous Rayleigh quotient. 
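As a numerical sanity check of \\eqref{eq:maxmax}, the sketch below samples a uniform spectrum as in the text (the seed and the choice of $\\lambda$ are arbitrary) and verifies, for each of the three polynomials in \\eqref{eq:g}, that the maximum over the spectrum is bounded by the values at $\\lambda_1$, $\\lambda_n$, and the vertex $\\widetilde\\lambda$:

```python
import numpy as np

rng = np.random.default_rng(1)
lams = np.sort(rng.uniform(0, 2, 100))   # spectrum lambda_i ~ U(0, 2)
lam = lams[75]                           # the eigenvalue being approximated

# The three polynomials p_lambda of (eq:g).
p_rq   = lambda t: t - lam
p_hrq  = lambda t: t * (t - lam) / lam
p_homo = lambda t: (t - lam) * (lam * t + 1) / (lam ** 2 + 1)

others = lams[lams != lam]
# Candidate maximizers: the extreme eigenvalues, plus the parabola vertex.
for p, vertex in [(p_rq, None),
                  (p_hrq, lam / 2),
                  (p_homo, (lam - 1 / lam) / 2)]:
    max_spec = np.max(np.abs(p(others)))
    cands = [lams[0], lams[-1]] + ([vertex] if vertex is not None else [])
    max_cand = max(abs(p(t)) for t in cands)
    assert max_spec <= max_cand + 1e-12  # (eq:maxmax)
```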
Although $p_\\lambda(t)$ always has a factor of $(t-\\lambda)$, due to the quadratic nature of the upper bound for the harmonic and the homogeneous Rayleigh quotient, we cannot make any a priori comparison with the bound for the Rayleigh quotient: without additional information about the spectrum, we cannot improve on these considerations.\n\nTo show the possible types of behavior, we consider a spectrum of $100$ random eigenvalues with a uniform distribution in $[0,2]$, i.e., $\\lambda_i\\sim U(0,2)$, and $100$ eigenvalues normally distributed with mean $0$ and standard deviation $1$, i.e., $\\lambda_i\\sim\\mathcal{N}(0,1)$. Similar spectra will be considered in the experiments in Section~\\ref{sec:exp}. In Figure~\\ref{fig:polys} we plot the functions $|p_{\\lambda}(t)|$ in \\eqref{eq:g} for four values of $\\lambda$, corresponding to position $25$, $50$, $75$, and $100$ in the ordered spectrum. We immediately notice that $\\arg\\!\\max_{\\lambda_i \\ne \\lambda} |p_\\lambda(\\lambda_i)|$ can be close to the vertex of the two parabolas, for both the homogeneous and the harmonic Rayleigh quotient, in the uniformly distributed spectrum. \n\\begin{figure}[ht] \n\\centering\n \\subfloat[Eigenvalues drawn from $\\mathcal{N}(0,1)$.]{\n \\includegraphics[width = 0.85\\textwidth]{Figure1.pdf}}\n \n \\subfloat[Eigenvalues drawn from $U(0,2)$.]{\n \\includegraphics[width = 0.85\\textwidth]{Figure2.pdf}}\n \\caption{The functions $|p_\\lambda|$ in \\eqref{eq:g} for different values of $\\lambda$, from different spectra. The value $\\max_{\\lambda_i\\ne \\lambda} |p_\\lambda(\\lambda_i)|$ is indicated by an asterisk.}\n \\label{fig:polys}\n\\end{figure}\n\nIn Figure~\\ref{fig:bounds} we compare the upper bounds of the three Rayleigh quotients, for all the eigenvalues in the two spectra. 
\n\\begin{figure}[ht] \n\\centering\n \\includegraphics[width = 0.55\\textwidth]{Figure3.pdf}\n \\caption{Approximate upper bounds for the different Rayleigh quotients, for a perturbation of size $\\varepsilon = 0.001$.}\n \\label{fig:bounds}\n\\end{figure}\nAs already suggested, the relative position of the bounds is problem-dependent, and also depends on the magnitude of the eigenvalues we are estimating. There are situations where the upper bounds of the harmonic and the standard Rayleigh quotient swap order, one becoming tighter than the other. Interestingly, the bound of the homogeneous Rayleigh quotient is never the largest of the three. In the normally distributed spectrum, the homogeneous Rayleigh quotient is often the best approximation. \n\n\\section{The generalized eigenvalue problem} \\label{sec:gep}\nFinally, we introduce the homogeneous Rayleigh quotient for the generalized eigenvalue problem (GEP) $A\\mathbf x = \\lambda B\\,\\mathbf x$, where $A$ is symmetric (definite or indefinite), and $B$ is SPD. 
This problem has $n$ real eigenvalues $\\lambda_1 \\le \\cdots \\le \\lambda_n$.\nAlthough a common Rayleigh quotient is $\\frac{\\mathbf u^T\\!A\\mathbf u}{\\mathbf u^T\\!B\\,\\mathbf u}$ (see, e.g., \\cite[Ch.~15]{Par98}), it turns out that in our context the most relevant quantities are\n\\begin{equation} \\label{rqgep}\n\\theta = \\frac{\\mathbf u^T\\!AB\\,\\mathbf u}{\\mathbf u^T\\!B^2\\mathbf u}, \\qquad\n\\widetilde \\theta = \\frac{\\mathbf u^T\\!A^2\\mathbf u}{\\mathbf u^T\\!AB\\,\\mathbf u},\n\\end{equation}\nfor nonzero $\\mathbf u^T\\!AB\\,\\mathbf u$.\nThe Rayleigh quotient $\\theta$ satisfies $A\\mathbf u-\\theta B\\,\\mathbf u \\perp B\\,\\mathbf u$, and, equivalently, it is the solution to $\\min_{\\gamma} \\|A\\mathbf u-\\gamma B\\,\\mathbf u\\|$.\nThe harmonic Rayleigh quotient $\\widetilde \\theta$ satisfies $A\\mathbf u-\\widetilde \\theta B\\,\\mathbf u \\perp A\\mathbf u$ and solves $\\min_{\\gamma} \\|\\gamma^{-1} A\\mathbf u-B\\,\\mathbf u\\|$.\nThe extension of the homogeneous Rayleigh quotient to the GEP is rather straightforward.\nIt is the solution to\n\\begin{equation} \\label{gen-homo-sec}\n\\min_{(\\alpha_1,\\,\\alpha_2) \\, \\in \\, \\mathbb P} \\ \\|\\alpha_1 B\\,\\mathbf u - \\alpha_2 A\\mathbf u\\| = \\min_{\\alpha_1^2+\\alpha_2^2=1} \\ \\|[B\\,\\mathbf u \\ \\ - \\!\\!A\\mathbf u] \\smtxa{c}{\\alpha_1 \\\\ \\alpha_2}\\|.\n\\end{equation}\nAs for the standard eigenvalue problem, this amounts to solving a reduced SVD of an $n \\times 2$ matrix, or an eigenvalue problem involving a $2 \\times 2$ matrix.\n\nThe next result is a generalization of Proposition~\\ref{prop:prop}.\nLet $C = [B\\,\\mathbf u \\ \\ -\\!\\!A\\mathbf u]$, with the associated $2 \\times 2$ cross-product matrix\n\\begin{equation*}\nC^TC = \\mtxa{cc}{\\phantom-\\mathbf u^T\\!B^2\\mathbf u & -\\mathbf u^T\\!AB\\,\\mathbf u \\\\ -\\mathbf u^T\\!AB\\,\\mathbf u & \\phantom-\\mathbf u^T\\!A^2\\mathbf u}.\n\\end{equation*}\n\n\\begin{proposition}\nSuppose that $\\mathbf 
u^T\\!AB\\,\\mathbf u \\ne 0$ and denote by $\\mu$ the smallest eigenvalue of $C^T\\!C$. Then the following properties hold.\n\\begin{itemize}\n\\item[(i)] $\\mu$ is a simple eigenvalue of $C^T\\!C$, or equivalently $\\sqrt{\\mu}$ is a simple singular value of $C$; its corresponding eigenvector $\\smtxa{c}{\\alpha_1 \\\\ \\alpha_2}$ is the unique minimizer of \\eqref{gen-homo-sec}, up to orthogonal transformations. \n\\item[(ii)] The value of $\\mu$ is\n\\[\n\\mu = \\tfrac12 \\, \\big[\\mathbf u^T\\!A^2\\mathbf u+\\mathbf u^T\\!B^2\\mathbf u - \\sqrt{\\smash[b]{(\\mathbf u^T\\!A^2\\mathbf u-\\mathbf u^T\\!B^2\\mathbf u)^2+4\\,(\\mathbf u^T\\!AB\\,\\mathbf u)^2}} \\ \\big].\n\\]\n\\item[(iii)] For the generalized homogeneous Rayleigh quotient $\\alpha$ we have\n\\begin{align}\n \\alpha &= \\frac{\\alpha_1}{\\alpha_2} = \\frac{\\mathbf u^T\\!AB\\,\\mathbf u}{\\mathbf u^T\\!B^2\\mathbf u - \\mu} = \\frac{\\mathbf u^T\\!A^2\\,\\mathbf u - \\mu}{\\mathbf u^T\\!AB\\,\\mathbf u}\\nonumber\\\\\n&= \\frac1{2\\ \\mathbf u^T\\!AB\\,\\mathbf u}\\big[ \\,\\mathbf u^T\\!A^2\\,\\mathbf u - \\mathbf u^T\\!B^2\\mathbf u + \\sqrt{(\\mathbf u^T\\!A^2\\,\\mathbf u - \\mathbf u^T\\!B^2\\mathbf u)^2 + 4\\,(\\mathbf u^T\\!AB\\,\\mathbf u)^2} \\,\\big].\n\\end{align}\nIn terms of the generalized standard and harmonic Rayleigh quotients \\eqref{rqgep},\n\\[\n\\alpha = \\tfrac12 \\, \\big[ \\, \\widetilde \\theta-\\theta^{-1} + \\sqrt{\\smash[b]{(\\widetilde \\theta-\\theta^{-1})^2 + 4}} \\, \\big]\n= \\big( \\tfrac12 \\, \\big[ \\, \\theta^{-1}-\\widetilde \\theta + \\sqrt{\\smash[b]{(\\theta^{-1}-\\widetilde \\theta)^2 + 4}} \\, \\big] \\big)^{-1}.\n\\]\n\\item[(iv)] $\\mu=0$ if and only if $\\mathbf u$ is an eigenvector. 
If $\\mathbf u$ is an eigenvector, then the associated homogeneous Rayleigh quotient $\\alpha$ equals the corresponding eigenvalue.\n\\item[(v)] $0 \\le \\mu < \\min(\\mathbf u^T\\!A^2\\mathbf u, \\ \\mathbf u^T\\!B^2\\mathbf u)$.\n\\item[(vi)] We have the following inequalities:\n\\begin{equation*}\n \\begin{cases}\n \\theta \\le \\alpha \\le \\widetilde \\theta &\\text{if}\\quad\\mathbf u^T\\!AB\\,\\mathbf u > 0, \\\\\n \\widetilde \\theta \\le \\alpha \\le \\theta &\\text{if}\\quad\\mathbf u^T\\!AB\\,\\mathbf u < 0.\n \\end{cases}\n\\end{equation*}\n\\end{itemize}\n\\end{proposition}\n\\begin{proof}\nThe proof follows the same lines as that of Proposition~\\ref{prop:prop}.\n\\end{proof}\nAs in Section~\\ref{sec:homo}, we can show that the generalized homogeneous Rayleigh quotient is a solution to a quadratic equation.\n\\begin{proposition}\\label{prop:genquadeq}\nLet $\\mathbf u^T\\!AB\\,\\mathbf u \\ne 0$. The generalized homogeneous Rayleigh quotient is the solution to \n\\begin{equation}\n\\label{eq:genhomopoly}\n (\\mathbf u^T\\!AB\\,\\mathbf u) \\, \\alpha^2 + (\\mathbf u^T\\!B^2\\mathbf u-\\mathbf u^T\\!A^2\\mathbf u) \\, \\alpha - (\\mathbf u^T\\!AB\\,\\mathbf u) = 0,\n\\end{equation}\nwhich satisfies $\\alpha\\,\\mathbf u^T\\!AB\\,\\mathbf u > 0$.\n\\end{proposition}\n\\begin{proof}\nThe proof is similar to that of Proposition~\\ref{prop:quadeq}. \n\\end{proof}\n\nWe derive a Galerkin condition from Proposition~\\ref{prop:genquadeq} as follows. 
Since $\\mathbf u^T\\!AB\\,\\mathbf u = \\mathbf u^T\\!B\\,A\\mathbf u$, \\eqref{eq:genhomopoly} is equivalent to\n\\begin{align*}\n \\mathbf u^T\\!(\\alpha A + B)(A - \\alpha B)\\,\\mathbf u = 0.\n\\end{align*}\nTherefore, we can write the corresponding Galerkin condition in homogeneous coordinates as\n\\begin{equation*}\n (\\alpha_2A - \\alpha_1B)\\,\\mathbf u \\perp (\\alpha_1A + \\alpha_2B)\\,\\mathbf u.\n\\end{equation*}\nWe will not treat the generalized homogeneous Rayleigh quotient in our experiments. Instead, we discuss and illustrate the use of the homogeneous Rayleigh quotient as stepsize in gradient methods.\n\n\\section{A homogeneous stepsize for gradient methods} \\label{sec:step}\nIn gradient methods for nonlinear optimization, stepsizes that are inverse Rayleigh quotients are popular choices. We refer to \\cite{daniela2018steplength} for a recent review of stepsizes.\nIn this section we exploit the inverse of the homogeneous Rayleigh quotient as a new \\emph{homogeneous stepsize} for gradient-type methods.\nConsider the unconstrained minimization of a smooth function $f: \\mathbb R^n \\to \\mathbb R$,\n\\begin{equation*}\n \\min_{\\mathbf x \\in \\mathbb{R}^n} \\, f(\\mathbf x).\n\\end{equation*}\nGradient methods are of the form\n\\begin{equation*}\n\\mathbf x_{k+1} = \\mathbf x_k - \\beta_k \\, \\mathbf g_k = \\mathbf x_k - \\alpha_k^{-1} \\, \\mathbf g_k,\n\\end{equation*}\nwhere $\\mathbf g_k = \\nabla f(\\mathbf x_k)$, $\\beta_k > 0$ is the stepsize, and $\\alpha_k$ the inverse stepsize.\nAs usual we write $\\mathbf s_{k-1} = \\mathbf x_k-\\mathbf x_{k-1}$ and $\\mathbf y_{k-1} = \\mathbf g_k-\\mathbf g_{k-1}$. 
\n\nIn \\cite{bb1988}, the Barzilai--Borwein stepsizes\n\\begin{equation} \\label{bb1}\n\\beta_k^{\\rm{BB1}} = \\frac{\\mathbf s_{k-1}^T \\, \\mathbf s_{k-1}}{\\mathbf s_{k-1}^T \\, \\mathbf y_{k-1}}, \\qquad\n\\beta_k^{\\rm{BB2}} = \\frac{\\mathbf s_{k-1}^T \\, \\mathbf y_{k-1}}{\\mathbf y_{k-1}^T \\, \\mathbf y_{k-1}}\n\\end{equation}\nwere introduced. The motivation is that, when we approximate the Hessian at $\\mathbf x_{k-1}$ by a scalar multiple of the identity, i.e., $\\nabla^2 f(\\mathbf x_{k-1}) \\approx \\gamma\\, I$, the corresponding inverse stepsizes satisfy the following least squares secant conditions:\n\\begin{equation} \\label{bbsec}\n\\alpha_k^{\\rm{BB1}} = \\argmin_{\\gamma} \\|\\mathbf y_{k-1} - \\gamma\\,\\mathbf s_{k-1}\\|, \\qquad\n\\alpha_k^{\\rm{BB2}} = \\argmin_{\\gamma} \\|\\gamma^{-1} \\, \\mathbf y_{k-1}-\\mathbf s_{k-1}\\|,\n\\end{equation}\nwith $\\alpha_k^{\\rm{BB1}} = (\\beta_k^{\\rm{BB1}})^{-1}$ and $\\alpha_k^{\\rm{BB2}} = (\\beta_k^{\\rm{BB2}})^{-1}$. Moreover, both stepsizes \\eqref{bb1} can be seen as inverse Rayleigh quotients of a certain matrix $H_k$ at $\\mathbf s_{k-1}$. In fact, for any $f$ it holds that\n\\begin{equation*}\n \\mathbf y_{k-1} = \\nabla f(\\mathbf x_k) - \\nabla f(\\mathbf x_{k-1}) = H_k \\, (\\mathbf x_k-\\mathbf x_{k-1}) = H_k \\, \\mathbf s_{k-1},\n\\end{equation*}\nwhere $H_k := \\int_0^1 \\nabla^2 f((1-t) \\, \\mathbf x_{k-1}+ t \\, \\mathbf x_k) \\, dt$ is an average Hessian on the line segment between $\\mathbf x_{k-1}$ and $\\mathbf x_k$. \nFrom this relation, it is easy to see that the minimum residual conditions \\eqref{opt1b} and \\eqref{opt2b} are equivalent to the secant conditions \\eqref{bbsec} for $\\mathbf u = \\mathbf s_{k-1}$ and $A = H_k$.\n\nWe introduce the homogeneous BB stepsize (HBB) as the inverse of the homogeneous Rayleigh quotient of $H_k$ in $\\mathbf s_{k-1}$. 
The HBB stepsize is given by the quotient $\\beta_k = \\alpha_{2,k} \/ \\alpha_{1,k}$, where the pair $(\\alpha_{1,k}, \\alpha_{2,k})$ solves\n\\[\n\\argmin_{\\alpha_1^2+\\alpha^2_2 = 1} \\ \\|\\alpha_1 \\, \\mathbf s_{k-1}-\\alpha_2 \\, \\mathbf y_{k-1}\\|.\n\\]\nAs in \\eqref{bbsec}, this is equivalent to the minimum residual condition \\eqref{homo-sec} for $\\mathbf u = \\mathbf s_{k-1}$ and $A = H_k$. We are interested in the eigenvector $\\smtxa{c}{\\alpha_{1,k} \\\\ \\alpha_{2,k}}$ corresponding to the smallest eigenvalue of\n\\begin{equation*}\n\\mtxa{cc}{\\phantom-\\mathbf s_{k-1}^T\\mathbf s_{k-1} & -\\mathbf s_{k-1}^T\\mathbf y_{k-1} \\\\[1mm] -\\mathbf s_{k-1}^T\\mathbf y_{k-1} & \\phantom-\\mathbf y_{k-1}^T\\mathbf y_{k-1}}\n= (\\mathbf s_{k-1}^T\\mathbf y_{k-1}) \\cdot \\mtxa{cc}{\\phantom-\\beta_k^{\\rm{BB1}} & -1 \\\\[0.5mm] -1 & \\phantom-\\alpha_k^{\\rm{BB2}}},\n\\end{equation*}\n(where the second equality holds for nonzero $\\mathbf s_{k-1}^T\\mathbf y_{k-1}$)\nand then select $\\beta_k = \\alpha_k^{-1} = \\alpha_{2,k} \/ \\alpha_{1,k}$. This gives (cf. Proposition~\\ref{prop:prop})\n\\begin{equation}\n\\label{eq:homobb}\n\\beta_k = \\frac1{2\\ \\mathbf s_{k-1}^T\\mathbf y_{k-1}} \\, \\big[ \\, \\mathbf s_{k-1}^T\\mathbf s_{k-1}-\\mathbf y_{k-1}^T\\mathbf y_{k-1} + \\sqrt{\\smash[b]{(\\mathbf s_{k-1}^T\\mathbf s_{k-1}-\\mathbf y_{k-1}^T\\mathbf y_{k-1})^2 + 4\\,(\\mathbf s_{k-1}^T\\mathbf y_{k-1})^2}} \\, \\big],\n\\end{equation}\nor, in terms of the BB steps:\n\\[\n\\beta_k = \\tfrac12\\,\\big[\\beta_k^{\\rm{BB1}} - \\alpha_k^{\\rm{BB2}} + \\sqrt{\\smash[b]{(\\beta_k^{\\rm{BB1}} - \\alpha_k^{\\rm{BB2}})^2 + 4}} \\, \\big].\n\\]\nIn Section~\\ref{sec:exp} we will carry out some experiments to test the behavior of the HBB stepsize when plugged into a gradient method for generic differentiable functions. Pseudocode for this method is provided in Algorithm~\\ref{algo:nonlin}. 
\n\\begin{algorithm}\n\\caption{A homogeneous gradient method for minimization of generic functions}\n\\label{algo:nonlin}\n{\\bf Input}: continuously differentiable function $f$, initial guess $\\mathbf x_0$, initial stepsize $\\beta_0 > 0$, tolerance {\\sf tol}; safeguarding parameters $\\beta_{\\max} > \\beta_{\\min} > 0$; line search parameters $c_{\\rm{ls}}$, $\\sigma_{\\rm{ls}} \\in (0,1)$; memory integer $M>0$ \\\\\n{\\bf Output}: approximation to minimizer $\\argmin_{\\mathbf x} f(\\mathbf x)$ \\\\\n\\begin{tabular}{rl}\n{\\footnotesize 1}: & Set $\\mathbf g_0 = \\nabla f(\\mathbf x_0)$ \\\\\n& {\\bf for} $k = 0, 1, \\dots$ \\\\\n{\\footnotesize 2}: & \\phantom{M} $\\nu_k = \\beta_k$, \\ $f_{\\text{ref}} = \\max \\, \\{ \\, f(\\mathbf x_{k-j}) \\, : \\, 0 \\le j \\le \\min(k,M-1) \\, \\}$ \\\\\n{\\footnotesize 3}: & \\phantom{M} {\\bf while} \\ $f(\\mathbf x_k-\\nu_k \\, \\mathbf g_k) > f_{\\text{ref}} - c_{\\rm{ls}} \\, \\nu_k \\, \\|\\mathbf g_k\\|^2$ \\ {\\bf do} \\ $\\nu_k = \\sigma_{\\rm{ls}} \\, \\nu_k$ \\ {\\bf end} \\\\\n{\\footnotesize 4}: & \\phantom{M} Set \\ $\\mathbf s_k = -\\nu_k \\, \\mathbf g_k$ \\ and update \\ $\\mathbf x_{k+1} = \\mathbf x_k + \\mathbf s_k$ \\\\\n{\\footnotesize 5}: & \\phantom{M} Compute the gradient \\ $\\mathbf g_{k+1} = \\nabla f(\\mathbf x_{k+1})$ \\\\\n{\\footnotesize 6}: & \\phantom{M} {\\bf if} \\ $\\|\\mathbf g_{k+1}\\| \\le {\\sf tol} \\cdot \\|\\mathbf g_0\\|$, \\ {\\bf return}, \\ {\\bf end} \\\\\n{\\footnotesize 7}: & \\phantom{M} $\\mathbf y_k = \\mathbf g_{k+1} - \\mathbf g_k$ \\\\\n{\\footnotesize 8}: & \\phantom{M} Choose $\\beta_{k+1}$ as in \\eqref{eq:homobb}\\\\\n{\\footnotesize 9}: & \\phantom{M} {\\bf if} $\\beta_{k+1} < 0$, take the last positive stepsize instead \\\\\n{\\footnotesize 10}: & \\phantom{M} Set $\\beta_{k+1} = \\min(\\max(\\beta_{k+1}, \\, \\beta_{\\min}), \\ \\beta_{\\max})$\\\\\n\\end{tabular}\n\\end{algorithm}\n\nThis algorithm is similar to the one proposed in 
\\cite{daniela2018steplength,raydan1997barzilai}, with the presence of the homogeneous stepsize \\eqref{eq:homobb} as the key difference, and the replacement of a negative stepsize with the last available positive stepsize. The convergence of the method is not affected by these choices, since $\\beta_k$ stays uniformly bounded, i.e., $\\beta_k\\in [\\beta_{\\min}, \\beta_{\\max}]$ for all $k$. Therefore, the proof of global convergence of Algorithm~\\ref{algo:nonlin} can be easily adapted from \\cite[Thm.~2.1]{raydan1997barzilai}. While the convergence is not affected, choosing the homogeneous stepsize as starting steplength in the nonmonotone line search might lead to a smaller number of backtracking steps, compared to classical BB stepsizes.\n\nWe finally remark that, as for the BB stepsizes, no line search is required for HBB steps when the function $f$ is strictly convex quadratic, i.e., $f(\\mathbf x) = \\tfrac12\\, \\mathbf x^TA\\mathbf x - \\mathbf b^T\\mathbf x$, with $A$ SPD (see the results in \\cite{dai2002r,raydan1993barzilai}). In fact, since $H_k \\equiv A$ is positive definite, from Proposition~\\ref{prop:prop} it holds that $\\alpha_k \\in [\\alpha_k^{\\rm{BB1}},\\,\\alpha_k^{\\rm{BB2}}]$. This is sufficient to guarantee the R-linear convergence of the corresponding gradient method (cf. \\cite[Thm.~16]{FHK22} or \\cite[Thm.~4.1]{dai2003alternate}). \n\n\\section{Numerical experiments} \\label{sec:exp}\nIn the first set of experiments, we compare the behavior of different Rayleigh quotients in terms of sensitivity, under small perturbations in the eigenvectors. The second set is devoted to exploring the performance of the homogeneous Rayleigh quotient as inverse stepsize in gradient methods. \n\n\\subsection{Comparison of the Rayleigh quotients}\nWe illustrate the behavior of the Rayleigh quotients \\eqref{rq}, \\eqref{hrq} and \\eqref{eq:homorq} and their sensitivity. 
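As a concrete illustration of the scheme of Algorithm 1, the following sketch (our own naming, assuming NumPy; not the authors' code) implements the nonmonotone line search with the homogeneous stepsize and runs it on a strictly convex quadratic:

```python
import numpy as np

def hbb_gradient_method(f, grad, x0, beta0=1.0, tol=1e-8, max_iter=5000,
                        beta_min=1e-30, beta_max=1e30,
                        c_ls=1e-4, sigma_ls=0.5, M=10):
    """Nonmonotone line search gradient method with the HBB stepsize (sketch)."""
    x = np.asarray(x0, dtype=float)
    g = grad(x)
    g0_norm = np.linalg.norm(g)
    beta = beta0
    f_hist = [f(x)]
    for k in range(max_iter):
        nu = beta
        f_ref = max(f_hist[-M:])                 # nonmonotone reference value
        while f(x - nu * g) > f_ref - c_ls * nu * (g @ g):
            nu *= sigma_ls                       # backtracking
        s = -nu * g
        x = x + s
        f_hist.append(f(x))
        g_new = grad(x)
        if np.linalg.norm(g_new) <= tol * g0_norm:
            return x, k + 1
        y = g_new - g
        ss, sy, yy = s @ s, s @ y, y @ y
        if sy != 0.0:
            beta_new = (ss - yy + np.sqrt((ss - yy)**2 + 4*sy**2)) / (2*sy)
            if beta_new > 0:                     # else keep last positive stepsize
                beta = beta_new
        beta = min(max(beta, beta_min), beta_max)
        g = g_new
    return x, max_iter

# Strictly convex quadratic f(x) = 0.5 x'Ax - b'x with A = diag(d).
d = np.linspace(1.0, 100.0, 30)
rng = np.random.default_rng(3)
b = rng.standard_normal(30)
f = lambda x: 0.5 * x @ (d * x) - b @ x
grad = lambda x: d * x - b
x_end, iters = hbb_gradient_method(f, grad, np.zeros(30))
x_star = b / d
```

In the quadratic case the curvature s'y is always positive, so the safeguard against negative stepsizes is never triggered here.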
Without loss of generality, we can restrict the study to diagonal matrices, thus $A = \\text{diag}(\\lambda_1,\\dots,\\lambda_n)$. We consider a first family of $100 \\times 100$ positive definite diagonal matrices where the eigenvalues have uniform distribution on the interval $[0,2\\sigma]$, i.e., $\\lambda_i \\sim U(0, 2\\sigma)$, $\\sigma > 0$. The second family contains $100 \\times 100$ indefinite diagonal matrices where the eigenvalues have Gaussian distribution with mean $0$ and variance $\\sigma^2$, which we indicate by $\\lambda_i \\sim \\mathcal{N}(0, \\sigma)$, $\\sigma > 0$. Since the eigenvalues are drawn from continuous probability distributions, there is zero probability that two eigenvalues are equal, so the eigenvectors are well defined with probability one.\n\nAn eigenpair of a generic diagonal $A$ is $(\\lambda, \\mathbf x)$, where $\\mathbf x$ is one of the vectors of the canonical basis of $\\mathbb{R}^{100}$.\nTo compute a random perturbation $\\mathbf u = \\mathbf x + \\varepsilon\\,\\mathbf e$, such that $\\|\\mathbf e\\| = 1$ and $\\mathbf x \\perp \\mathbf e$, we start from a vector with random Gaussian components, project it onto the orthogonal complement of $\\mathbf x$ and normalize to get $\\mathbf e$. Finally we normalize $\\mathbf u$.\n\nFirst, we study the behavior of the different Rayleigh quotients when perturbed, with $\\sigma \\in \\{0.5, 1, 5, 10\\}$ and $\\varepsilon = 0.001$. For each eigenvector $\\mathbf x$ we draw $100$ random perturbations $\\mathbf u_{j}$, and take the maximum relative error. The desired quantity, e.g., for the Rayleigh quotient, is\n\\[\n\\max_j \\frac{\\vert\\theta(\\mathbf u_{j})-\\lambda\\vert}{\\vert\\lambda\\vert}.\n\\]\nFigure~\\ref{fig:sigmas} shows the maximum relative error as a function of the estimated eigenvalue $\\lambda$. 
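The perturbation construction and the resulting errors can be reproduced in a few lines. In this sketch (assuming NumPy; all names are ours), the homogeneous Rayleigh quotient is evaluated through the closed form obtained by replacing the pair (s, y) with (u, Au) in the HBB formula and inverting, which is an assumption on our part about the intended evaluation:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100
lam = np.sort(rng.uniform(0.0, 2.0, n))      # SPD diagonal spectrum, lam_i ~ U(0, 2)
A = np.diag(lam)

j = n - 1                                    # target the largest eigenvalue
x = np.zeros(n)
x[j] = 1.0                                   # exact (canonical) eigenvector

# Perturbation u = x + eps*e with ||e|| = 1 and e orthogonal to x, then normalized.
e = rng.standard_normal(n)
e -= (e @ x) * x                             # project onto the orthogonal complement of x
e /= np.linalg.norm(e)
eps = 1e-3
u = x + eps * e
u /= np.linalg.norm(u)

Au = A @ u
a, c, b = u @ u, u @ Au, Au @ Au             # u'u, u'Au, u'A^2u

theta = c / a                                # standard Rayleigh quotient
theta_h = b / c                              # harmonic Rayleigh quotient
alpha = (b - a + np.sqrt((a - b)**2 + 4*c**2)) / (2*c)   # homogeneous RQ (assumed form)

rel_err = lambda t: abs(t - lam[j]) / abs(lam[j])
```

For an exact eigenvector all three quotients return the eigenvalue; under an O(eps) perturbation the errors are O(eps^2), and in this SPD setting the homogeneous quotient lies between the standard and the harmonic one.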
The lower the relative error, the better the Rayleigh quotients approximate the corresponding eigenvalue.\n\\begin{figure}[ht]\n \\centering\n \\includegraphics[width=\\textwidth]{Figure4.pdf}\n \\caption{Sensitivity of Rayleigh quotients (RQ) for different spectra. The label of each graph indicates the distribution of the $100$ eigenvalues. Each eigenvalue is plotted against its corresponding maximum relative error, in logarithmic scale.}\n \\label{fig:sigmas}\n\\end{figure}\nIn these examples, there are peaks close to zero since we consider relative errors; therefore, the Rayleigh quotients are more sensitive when small eigenvalues are approximated. In particular, the harmonic Rayleigh quotient is always more sensitive than the other two for small eigenvalues. By looking at the two families separately, Gaussian and uniformly distributed, we notice that as $\\sigma$ increases, the curves of the Rayleigh quotient and the harmonic Rayleigh quotient remain almost unchanged.\nIn the uniformly distributed case, when $\\sigma = 0.5$, the homogeneous Rayleigh quotient closely follows the Rayleigh quotient in the first half of the spectrum and slightly departs in the second half. Starting from $\\sigma = 1$, the curve gets closer and closer to the harmonic Rayleigh quotient until the two curves are very similar, except for the smaller eigenvalues, where the relative error of the homogeneous Rayleigh quotient is lower than that of the harmonic Rayleigh quotient. We observe the same behavior in the Gaussian family, with the interesting addition that when $\\sigma = 1$, the homogeneous Rayleigh quotient is even less sensitive than the harmonic Rayleigh quotient. \n\nWe illustrate the behavior of the Rayleigh quotients when $\\varepsilon \\in \\{0.5,0.1,\\,0.01,\\,0.001\\}$, to investigate the influence of different perturbations. We report the results for $\\lambda_i\\sim \\mathcal{N}(0,1)$ in Figure~\\ref{fig:normdelta}. 
\n\\begin{figure}[ht]\n \\centering\n \\includegraphics[width=\\textwidth]{Figure5.pdf}\n \\caption{Sensitivity of Rayleigh quotients for different $\\varepsilon$, when the eigenvalues are drawn from $\\mathcal{N}(0,1)$. Each eigenvalue is plotted against its corresponding maximum relative error, in logarithmic scale.}\n \\label{fig:normdelta}\n\\end{figure}\nAs expected, the relative error increases as $\\varepsilon$ increases, but the relative positions of the Rayleigh quotients remain the same. The relative error of the harmonic Rayleigh quotient deteriorates more than the others when $\\varepsilon = 0.5$, but in this case the approximate eigenvector is not of good quality.\n\nFinally, we look at the first-order bounds of Section~\\ref{sec:sens}, when $\\lambda_i \\sim \\mathcal{N}(0,1)$ and $\\lambda_i \\sim U(0,2)$. Instead of just showing the maximum relative error, we draw $10$ random perturbations for each eigenvector, and plot the corresponding relative errors. The results are presented in Figure~\\ref{fig:norm-and-unif}. \n\\begin{figure}[ht] \n\\centering\n \\subfloat[Eigenvalues drawn from $\\mathcal{N}(0,1)$]{\n \\includegraphics[width = 0.85\\textwidth]{Figure6.pdf}}\n \n \\subfloat[Eigenvalues drawn from $U(0,2)$]{\n \\includegraphics[width = 0.85\\textwidth]{Figure7.pdf}}\n \\caption{Sensitivity of the Rayleigh quotients for two different spectra. Each eigenvalue is plotted against its $10$ corresponding relative errors, one for each perturbation of size $\\varepsilon = 0.001$. The first-order upper and lower bounds are shown as red lines. Points that fall outside these bounds are plotted in red.}\n\\label{fig:norm-and-unif}\n\\end{figure}\nWe observe that the lower bounds are more erratic than the upper bounds. This can be justified as follows. In Section~\\ref{sec:homobounds} we have observed that all bounds depend on a continuous nonnegative function $\\vert p_\\lambda(t)\\vert$, which has a zero at $t = \\lambda$ (cf.~\\eqref{eq:g}). 
Therefore, given a $\\lambda_j$, the minimizer $\\argmin_{\\lambda_i \\ne \\lambda_j} \\vert p_{\\lambda_j}(\\lambda_i)\\vert$ is typically either $\\lambda_{j-1}$ or $\\lambda_{j+1}$; the maximizer is generally either $\\lambda_1$ or $\\lambda_n$ (cf.~\\eqref{eq:maxmax}). For this reason, the lower bound shows more variability. While this statement holds for the Rayleigh quotient, this is not always true for the harmonic and the homogeneous Rayleigh quotients; see Figure~\\ref{fig:polys} for an example where the upper bound is independent of the extreme eigenvalues. \n\nRegarding the behavior of the relative errors, the plots in Figure~\\ref{fig:norm-and-unif} reasonably reflect the ones in Figure~\\ref{fig:sigmas}. For the uniformly distributed eigenvalues, the upper bound seems to be sharp at the extremes of the spectrum, while for the Gaussian family it is sharper only close to the smallest eigenvalues (in magnitude). The upper bound for the Rayleigh quotient is also sharp at the largest eigenvalues. While the lower bound of the uniformly distributed family poorly reflects the behavior of the relative error, in the Gaussian family it is tighter. In particular, it is very accurate for the homogeneous Rayleigh quotient. In Figure~\\ref{fig:sigmas}, we have already remarked that the maximum relative error in the homogeneous Rayleigh quotient is lower than for the other Rayleigh quotients. Now we see that for some perturbations, the sensitivity can be even lower.\n\n\\subsection{The homogeneous Rayleigh quotient as stepsize}\nNext, we test the use of the homogeneous stepsize HBB and an adaptive variant in Algorithm~\\ref{algo:nonlin}, on a set of unconstrained optimization problems. 
We take some generic differentiable functions from the collection in \\cite{andrei2008unconstrained,more1981testing,raydan1997barzilai} and the suggested starting points therein, as listed in Table~\\ref{tab:unconstrained}.\n\\begin{table}[ht]\n\\centering\n\\caption{Unconstrained optimization test problems.}\n{\\footnotesize\n\\begin{tabular}{lclc} \\hline \nName & Reference & Name & Reference \\\\ \\hline \\rule{0pt}{2.3ex}\n{\\sf Broyden tridiagonal} & \\cite{more1981testing}& {\\sf generalized White and Holst} & \\cite{andrei2008unconstrained} \\\\\n{\\sf extended Powell} & \\cite{more1981testing} & {\\sf Hager} &\\cite{andrei2008unconstrained}\\\\\n{\\sf extended Rosenbrock} & \\cite{more1981testing}& {\\sf strictly convex 2} & \\cite{raydan1997barzilai}\\\\\n{\\sf generalized Rosenbrock} & \\cite{andrei2008unconstrained} & {\\sf trigonometric} & \\cite{more1981testing}\\\\\n{\\sf generalized tridiagonal 1} &\\cite{andrei2008unconstrained} & &\\\\\n\\hline\n\\end{tabular}}\n\\label{tab:unconstrained}\n\\end{table}\n\nWe remark that the problems that come from the collection \\cite{more1981testing} are nonlinear least squares problems\n\\begin{equation*}\n \\min_{\\mathbf x\\in\\mathbb{R}^n} \\ \\tfrac{1}{2}\\,\\sum_{i=1}^m f_i^2(\\mathbf x),\n\\end{equation*}\nwhere $m$ is a function of $n$. For all the test functions, we pick $n\\in \\{10^2, 10^3, 10^4\\}$. The {\\sf generalized Rosenbrock}, {\\sf generalized White and Holst} and {\\sf extended Powell} objective functions have been scaled by the Euclidean norm of the first gradient. \n\nAs for the parameters of Algorithm~\\ref{algo:nonlin}, we set the choices $\\beta_{\\min} = 10^{-30}$, $\\beta_{\\max} = 10^{30}$, $c_{\\rm{ls}} = 10^{-4}$, $\\sigma_{\\rm{ls}} = \\frac12$, $M = 10$ and $\\beta_0 = 1$.\nThe algorithm stops when $\\|\\mathbf g_k\\| \\le {\\sf tol} \\cdot \\|\\mathbf g_0\\|$, with ${\\sf tol} = 10^{-8}$, or when $5\\cdot 10^4$ iterations are reached. 
All different steps in Table~\\ref{tab:exp} are tested. Along with the homogeneous stepsize (HBB), BB1 and BB2, we also consider the adaptive ABB method \\cite{zhou2006abb}:\n\\[\n\\beta_k^{\\rm ABB} = \\left\\{ \n\\begin{array}{ll}\n\\beta_k^{\\rm BB2} = \\frac{\\mathbf s_{k-1}^T\\mathbf y_{k-1}}{\\mathbf y_{k-1}^T\\mathbf y_{k-1}} & \\quad {\\rm if}\\; \\cos^2(\\mathbf s_{k-1}, \\, \\mathbf y_{k-1}) < \\eta, \\\\[4mm]\n\\beta_k^{\\rm BB1} = \\frac{\\mathbf s_{k-1}^T\\mathbf s_{k-1}}{\\mathbf s_{k-1}^T\\mathbf y_{k-1}} & \\quad \\text{otherwise}.\n\\end{array}\n\\right.\n\\]\nA default value is $\\eta = 0.8$; cf., e.g., \\cite{daniela2018steplength}. Inspired by this, we also propose a straightforward generalization of our HBB step: the adaptive HBB method (AHBB), which takes the stepsize\n\\begin{equation} \\label{ahbb}\n\\beta_k^{\\rm AHBB} = \\left\\{ \n\\begin{array}{ll}\n\\beta_k^{\\rm HBB} = \\alpha_k^{-1} & \\quad {\\rm if}\\; \\cos^2(\\mathbf s_{k-1}, \\, \\mathbf y_{k-1}) < \\eta, \\\\[1mm]\n\\beta_k^{\\rm BB1} = \\frac{\\mathbf s_{k-1}^T\\mathbf s_{k-1}}{\\mathbf s_{k-1}^T\\mathbf y_{k-1}} & \\quad \\text{otherwise}.\n\\end{array}\n\\right.\n\\end{equation}\n\n\\begin{table}[ht]\n\\centering\n\\caption{Tested stepsizes.}\n\\label{tab:exp}\n {\\footnotesize \n\\begin{tabular}{llc} \n\\hline \n\\rule{0pt}{2.1ex}\n{\\bf Method} & {\\bf Stepsize} & {\\bf Reference} \\\\ \\hline \\rule{0pt}{3.8ex}\nBB1 & $\\frac{\\mathbf s_{k-1}^T\\mathbf s_{k-1}}{\\mathbf s_{k-1}^T\\mathbf y_{k-1}}$ \\quad\n(cf.~$\\theta^{-1}$ in \\eqref{rq}) & \\cite{bb1988} \\\\[3mm]\nBB2 & $\\frac{\\mathbf s_{k-1}^T\\mathbf y_{k-1}}{\\mathbf y_{k-1}^T\\mathbf y_{k-1}}$ \\quad (cf.~$\\widetilde \\theta^{-1}$ in \\eqref{hrq}) & \\cite{bb1988} \\\\[3mm]\nABB & {\\bf if} $\\cos^2(\\mathbf s_{k-1},\\,\\mathbf y_{k-1}) < 0.8$, BB2 {\\bf else} BB1 & \\cite{zhou2006abb} \\\\[2mm]\nHBB & $\\alpha^{-1} = \\frac{\\alpha_2}{\\alpha_1}$, smallest singular vector of $[\\mathbf s_{k-1} \\ \\ 
-\\!\\mathbf y_{k-1}]$ & \\eqref{eq:homobb} \\\\[2mm]\nAHBB & {\\bf if} $\\cos^2(\\mathbf s_{k-1},\\,\\mathbf y_{k-1}) < 0.8$, HBB {\\bf else} BB1 & \\eqref{ahbb} \\\\ \\hline\n\\end{tabular}\n}\n\\end{table}\nNote that the homogeneous stepsize \\eqref{eq:homobb} costs as much as the BB stepsizes, since it only involves the computations of the inner products $\\mathbf s_{k-1}^T\\mathbf y_{k-1}$, $\\mathbf s_{k-1}^T\\mathbf s_{k-1}$, and $\\mathbf y_{k-1}^T\\mathbf y_{k-1}$. This coincides with the cost of determining the smallest singular vector of an $n \\times 2$ matrix, i.e., $\\mathcal O(n)$, just as for the standard and harmonic Rayleigh quotient.\n\nTable~\\ref{tab:homounconstr} reports the number of function evaluations and the number of iterations for each problem and stepsize. An entry is omitted when the algorithm did not converge within the maximum number of iterations.\n\\begin{table}[ht]\n\\centering\n\\caption{Number of function evaluations and iterations for each stepsize and problem. 
Boldface items are minimal.} \n\\label{tab:homounconstr}\n\\scalebox{.85}{\n\\begin{tabular}{l|rrrrr|rrrrr}\n \\hline\n \\multicolumn{1}{l}{Problem and size}& \\multicolumn{5}{c}{NFE}& \\multicolumn{5}{c}{Iterations}\\\\\n & BB1 & BB2 & ABB & HBB & AHBB & BB1 & BB2 & ABB & HBB & AHBB \\\\ \\hline \\rule{0pt}{2.3ex}\n Broyden tridiag ($10^2$) & 110 & \\bf 84 & 87 & \\bf 84 & 90 & 101 & \\bf 79 & 83 & 80 & 85 \\\\ \n Broyden tridiag ($10^3$) & \\bf 84 & 105 & 97 & 126 & 93 & \\bf 76 & 100 & 91 & 117 & 86 \\\\ \n Broyden tridiag ($10^4$) & 90 & \\bf 87 & 89 & 111 & 114 & \\bf 82 & 83 & 85 & 104 & 106 \\\\ \n Ext~Powell ($10^2$) & 404 & \\bf 124 & 299 & 492 & 345 & 253 & \\bf 112 & 266 & 313 & 234 \\\\ \n Ext~Powell ($10^3$) & 226 & \\bf 162 & 294 & 303 & 246 & 163 & \\bf 130 & 266 & 197 & 164 \\\\ \n Ext~Powell ($10^4$) & 506 & \\bf 186 & 267 & 398 & 422 & 308 & \\bf 163 & 231 & 238 & 258 \\\\ \n Ext~Rosenbrock ($10^2$) & \\bf 103 & 751 & 763 & 756 & 762 & \\bf 59 & 741 & 750 & 736 & 746 \\\\ \n Ext~Rosenbrock ($10^3$) & \\bf 103 & 751 & 763 & 749 & 762 & \\bf 59 & 741 & 750 & 731 & 748 \\\\ \n Ext~Rosenbrock ($10^4$) & \\bf 103 & 751 & 763 & 750 & 768 & \\bf 59 & 741 & 750 & 732 & 746 \\\\ \n Gen~Rosenbrock ($10^2$) & 5855 & & & 5528 & \\bf 4583 & 3734 & & & 3562 & \\bf 2936 \\\\ \n Gen~Rosenbrock ($10^3$) & \\sca{38536} & & & \\sca{\\bf 38532} & \\sca{38915} & \\sca{\\bf 24491} & & & \\sca{24509} & \\sca{24599} \\\\ \n Gen~tridiag 1 ($10^2$) & \\bf 30 & 34 & 31 & 34 & 32 & \\bf 28 & 32 & 29 & 32 & 30 \\\\ \n Gen~tridiag 1 ($10^3$) & \\bf 30 & 31 & 31 & 31 & 31 & \\bf 28 & 29 & 29 & 29 & 29 \\\\ \n Gen~tridiag 1 ($10^4$) & 30 & 28 & \\bf 26 & 28 & \\bf 26 & 27 & 25 & \\bf 23 & 25 & \\bf 23 \\\\ \n Gen~W \\& H ($10^2$) & \\sca{14647} & & & \\sca{13488} & \\sca{\\bf 13186} & 9138 & & & 8449 & \\bf 8222 \\\\ \n Hager ($10^2$) & 33 & 35 & \\bf 30 & 34 & \\bf 30 & 30 & 32 & \\bf 27 & 31 & \\bf 27 \\\\ \n Hager ($10^3$) & 53 & 66 & \\bf 52 & 54 & 53 & 49 & 62 & \\bf 48 & 
50 & 49 \\\\ \n Hager ($10^4$) & 124 & 98 & 89 & \\bf 84 & 102 & 112 & 90 & 81 & \\bf 76 & 94 \\\\ \n Strict-Convex 2 ($10^2$) & 118 & 94 & 90 & 103 & \\bf 86 & 102 & 88 & 86 & 89 & \\bf 79 \\\\ \n Strict-Convex 2 ($10^3$) & 569 & 316 & \\bf 236 & 237 & 311 & 384 & 302 & \\bf 215 & \\bf 215 & 273 \\\\ \n Strict-Convex 2 ($10^4$) & 2577 & 626 & 569 & 567 & \\bf 535 & 1556 & 601 & 529 & \\bf 524 & 495 \\\\ \n Trigonometric ($10^2$) & 124 & 175 & 204 & 152 & \\bf 119 & 112 & 173 & 202 & 134 & \\bf 106 \\\\ \n Trigonometric ($10^3$) & 139 & 233 & 214 & 137 & \\bf 133 & 123 & 227 & 209 & 119 & \\bf 118 \\\\ \n \\hline\n\\end{tabular}}\n\\end{table}\n\nThe homogeneous stepsize HBB does not seem to be more effective than the other stepsizes, either in reducing the number of function evaluations or the number of iterations. However, the adaptive homogeneous stepsize AHBB seems to be able to reduce the number of function evaluations for some problems. The most significant improvement is in the {\\sf generalized Rosenbrock} function, when $n = 100$.\n\n\\section{Conclusions} \\label{sec:con}\nIn this paper we have introduced a new type of Rayleigh quotient -- the homogeneous Rayleigh quotient $\\alpha$ -- which minimizes the homogeneous residual quantity \\eqref{homo-sec}. We have shown that the value of $\\alpha$ lies between the standard and the harmonic Rayleigh quotient, and so does its sensitivity, when we want to approximate the extreme eigenvalues of an SPD matrix $A$. Regarding the other eigenvalues and the indefinite case, we have provided examples where the homogeneous Rayleigh quotient is more accurate than the others.\nIn addition, in all our experiments, we have noticed that the homogeneous Rayleigh quotient is less sensitive than one of the other two Rayleigh quotients. 
To some extent, the homogeneous Rayleigh quotient seems to leverage the two Rayleigh quotients: it is less sensitive than the harmonic Rayleigh quotient when estimating the smallest eigenvalues, and less sensitive than the standard Rayleigh quotient when estimating the largest eigenvalues (in magnitude). \n\nGiven that the standard and the harmonic Rayleigh quotient arise from a Galerkin condition, we have derived a Galerkin condition for the homogeneous Rayleigh quotient. All the results extend to the homogeneous Rayleigh quotient for the generalized eigenvalue problem $(A,B)$, for SPD $B$ and symmetric $A$.\n\nFinally, we have considered the homogeneous Rayleigh quotient as inverse stepsize (HBB) in gradient methods for unconstrained optimization problems. We have also proposed the AHBB steplength as an alternative to the ABB stepsize \\cite{zhou2006abb}, based on the homogeneous stepsize. Experiments show that this variant sometimes performs better than the classical BB steplengths.\n\n\\bibliographystyle{spmpsci} ","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\nThe subject of order statistics deals with the properties and distributions of the ordered random variables (RVs) and their functions. 
It has found applications in many areas of statistical theory and practice~\\cite{kn:Order_Statistics}, with examples in life-testing, quality control, radar, as well as signal and image processing~\\cite{kn:Handbook_of_Statistics,kn:Goldsmith_book,kn:Multi_PDF_Order_Selection,kn:Steven_Kay,kn:JAHwang,kn:RCHardie,kn:CKotropoulos}.\nOrder statistics has made over the last decade an increasing number of appearances in the design and analysis of wireless communication systems, specifically for the performance analysis of advanced diversity techniques, adaptive transmission techniques, and multiuser scheduling techniques (see for example~\\cite{kn:simon_book, kn:MS_GSC, kn:OT_MRC_j, kn:Choi_finger, kn:Hong_Sumrate, kn:alouini_wi_j3, kn:ma00, kn:win2001, kn:annamalai2002, kn:Mallik_05, kn:bouida2008, kn:Alouini_Order_Statistics, kn:Chambers, kn:Smith}). \nIn these performance analysis exercises, the joint statistics of partial sums of ordered RVs are often necessary for the accurate characterization of system performance~\\cite{kn:Choi_finger, kn:bouida2008, kn:capture_outage_GSC}.\nNote that even if the original unordered RVs are independently distributed, their ordered versions are dependent due to the inequality relations among them, which makes it challenging to derive such joint statistics. Recently, a successive conditioning approach was used to convert dependent ordered random variables into independent unordered ones \\cite{kn:MS_GSC, kn:OT_MRC_j}. However, this approach requires some case-specific manipulations, which may not always be generalizable.\n\nMotivated by these facts, we introduced in \\cite{kn:unified_approach} a unified analytical framework to determine the joint statistics of partial sums of ordered independent and identically distributed (i.i.d.) RVs by extending the interesting results published in~\\cite{kn:Integral_Solution_Nuttall,kn:joint_PDF_Nuttall,kn:Multi_PDF_Order_Selection}. 
More specifically, our approach can be applied not only to the cases when all the $N$ ordered RVs are involved but also to the cases when only the $N_s$ $\\left(N_s < N\\right)$ best RVs are considered. \nWith the proposed approach, we can systematically derive the joint statistics of any partial sums of ordered statistics, in terms of the moment generating function (MGF) and the probability density function (PDF).\nThese statistical results can be used for the performance analysis of various wireless communication systems over generalized fading channels \\cite{kn:simon_book}. However, the identical fading assumption on all diversity branches is not always valid in real-life applications.\nThe average fading power may vary from one path to the other because the branches of a diversity system are sometimes unbalanced and the communication system is sometimes operating over frequency-selective channels with a non-uniform power delay profile or channel multipath intensity profile (i.e., the average SNRs of the diversity paths are not necessarily the same). \n\nWe therefore introduce in this paper a unified analytical framework to determine the joint statistics of partial sums of ordered independent non-identically distributed (i.n.d.) RVs by extending our previous work for i.i.d. fading scenarios \\cite{kn:unified_approach}.\nMore specifically, we use an MGF based systematic analytical approach to investigate the joint statistics of any partial sums of ordered statistics for general i.n.d. fading, in terms of MGF and the PDF. We would like to emphasize that such a generalization is nontrivial.\nThe main challenge for generalizing the work in \\cite{kn:unified_approach} to i.n.d. general fading cases is that the joint PDF of ordered i.n.d. RVs is much more complicated than that of ordered i.i.d. RVs. We need to carry out more detailed manipulations and introduce new mathematical representations to obtain the generic results (e.g. joint MGF and related joint PDF) for i.n.d. 
general cases in a compact form. \nIn addition, we present the closed-form expressions for the exponential RV special case, which is the most widely used in the wireless literature. For other types of RVs, our approach will lead to much simpler results than the conventional approach involving multiple-fold integration.\nFurthermore, the exponential distribution is frequently used in the performance evaluation of networks and telecommunication systems. It is also used to model the waiting times between occurrences of rare events, and lifetimes of electrical or mechanical devices \\cite{kn:Handbook_Stojmenovic, kn:Goldsmith_book,kn:Handbook_of_Statistics,kn:Introduction_Prob_and_Appli}. Finally, as an application of our analytical framework, we generalize the performance results of GSC-based RAKE receivers in \\cite{kn:capture_outage_GSC} by maintaining the assumption of independence among the diversity paths but relaxing the identically distributed assumption. We also discuss a couple of other sample applications of the generic results presented in this work.\n\n\n\n\\section{Problem Statement and Main Idea}\nOrder statistics deals with the distributions and statistical properties of the new random variables obtained after ordering the realizations of some random variables.\nLet $\\left\\{ \\gamma_{i_l} \\right\\}$, $i_l = 1, 2,\\cdots, N$ denote $N$ i.n.d. nonnegative random variables with PDF $p_{i_l}\\left(\\cdot \\right)$ and CDF $P_{i_l}\\left( \\cdot \\right)$. Let $u_i$ denote the random variable corresponding to the $i$-th largest observation of the $N$ original random variables (also called the $i$-th order statistic), such that $u_1 \\ge u_2 \\ge \\cdots \\ge u_N$. 
The $N$-dimensional joint PDF of the ordered RVs $\\left\\{ {u_{i} } \\right\\}_{i = 1}^N$ is given by~\\cite{kn:Order_Statistics}\n\\begin{equation} \\small\\label{eq:m-joint_PDF_MRC}\ng\\left( {{u_1},{u_2}, \\ldots ,{u_N}} \\right) = \\sum\\limits_{\\scriptstyle {i_1},{i_2}, \\ldots ,{i_N} \\hfill \\atop \n \\scriptstyle {i_1} \\ne {i_2} \\ne \\cdots \\ne {i_N} \\hfill}^{1,2, \\ldots ,N} {{p_{{i_1}}}\\left( {{u_1}} \\right){p_{{i_2}}}\\left( {{u_2}} \\right) \\cdots {p_{{i_N}}}\\left( {{u_N}} \\right)}.\n\\end{equation}\nSimilarly, the $N_s$-dimensional joint PDF of $\\left\\{ {u_{i} } \\right\\}_{i = 1}^{N_s}$ is given by~\\cite{kn:Order_Statistics}\n\\small\\begin{eqnarray} \\label{eq:m-joint_PDF_GSC}\n\\!\\!\\!\\!\\!\\!g\\left( {{u_1},{u_2}, \\cdots ,{u_{{N_s}}}} \\right) \\!\\!&=& \\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\sum\\limits_{\\scriptstyle {i_1},{i_2}, \\cdots ,{i_N} \\atop \n \\scriptstyle {i_1} \\ne {i_2} \\ne \\cdots \\ne {i_N}}^{1,2, \\cdots ,N} \\!\\!\\! {{p_{{i_1}}}\\left( {{u_1}} \\right){p_{{i_2}}}\\left( {{u_2}} \\right) \\cdots {p_{{i_{{N_s}}}}}\\left( {{u_{{N_s}}}} \\right)\\prod\\limits_{j = {N_s} + 1}^N {{P_{{i_j}}}\\left( {{u_{{N_s}}}} \\right)} } \\nonumber \\\\\n&{\\rm{or}}& \\nonumber \\\\\n\\!\\!\\!\\!\\!\\!&=& \\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\sum\\limits_{\\scriptstyle {i_1},{i_2}, \\cdots ,{i_N} \\atop \n \\scriptstyle {i_1} \\ne {i_2} \\ne \\cdots \\ne {i_{{N_s}}}}^{1,2, \\cdots ,N} \\!\\!\\! {{p_{{i_1}}}\\left( {{u_1}} \\right){p_{{i_2}}}\\left( {{u_2}} \\right) \\cdots {p_{{i_{{N_s}}}}}\\left( {{u_{{N_s}}}} \\right)\\!\\!\\!\\!\\sum\\limits_{\\substack{\n {i_{{N_s} + 1}}, \\cdots ,{i_N} \\\\ \n {i_{{N_s} + 1}} \\ne \\cdots \\ne {i_N} \\\\ \n {i_{{N_s} + 1}} \\ne {i_1},{i_2}, \\cdots ,{i_{{N_s}}} \\\\ \n \\vdots \\\\ \n {i_N} \\ne {i_1},{i_2}, \\cdots ,{i_{{N_s}}} \\\\ \n }}^{1,2, \\cdots ,N} {\\prod\\limits_{\\scriptstyle l = {N_s} + 1 \\atop \n \\scriptstyle \\left\\{ {{i_{{N_s} + 1}}, \\cdots ,{i_N}} \\right\\}}^N \\!\\!\\!\\! 
{{P_{{i_l}}}\\left( {{u_{{N_s}}}} \\right)} } }.\n\\end{eqnarray}\\normalsize\n\nThe objective is to derive the joint PDF of partial sums involving either all $N$ or the first $N_s$ ($N_s < N$) ordered RVs for the more general case in which the diversity paths are independent but not necessarily identically distributed.\nSimilar to \\cite{kn:unified_approach}, we adopt a general two-step approach:\n\\begin{itemize}\n\\renewcommand{\\labelitemi}{$\\bullet$}\n\\item Step I: Obtain the analytical expressions of the joint MGF of partial sums (not necessarily the partial sums of interest as will be seen later). \n\n\\item Step II: Apply an inverse Laplace transform to derive the joint PDF of partial sums (additional integration may be required to obtain the desired joint PDF).\n\\end{itemize}\nIn step I, by interchanging the order of integration, while ensuring each pair of limits is chosen to be as tight as possible, the multiple integral can be rewritten into compact equivalent representations.\nAfter obtaining the joint MGF in a compact form, we can derive the joint PDF of the selected partial sums through an inverse Laplace transform. For most cases of interest, the joint MGF involves basic functions, for which the inverse Laplace transform can be calculated analytically. In the worst case, we may rely on the Bromwich contour integral. In most cases, the result involves a single one-dimensional contour integration, which can be easily and accurately evaluated numerically with the help of integral tables~\\cite{kn:Mathematical_handbook, kn:gradshteyn_6} or using standard mathematical packages such as Mathematica and Matlab.\n\nThe above general steps can be directly applied when all $N$ ordered RVs are considered and the RVs in the partial sums are consecutive. When either of these conditions does not hold, we need to apply some extra steps in the analysis in order to obtain a valid joint MGF \\cite{kn:unified_approach}. 
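Before turning to examples, the ordered i.n.d. setup itself can be checked numerically. In the sketch below (hypothetical rates, assuming NumPy), a Monte Carlo draw verifies the elementary fact that the largest order statistic $u_1$ of i.n.d. exponential RVs has a CDF equal to the product of the individual CDFs, since $u_1 \le t$ exactly when every branch is at most $t$:

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical i.n.d. exponential branches with distinct rates (unbalanced average powers).
rates = np.array([0.5, 1.0, 2.0, 4.0])
N, trials = len(rates), 200_000

samples = rng.exponential(1.0 / rates, size=(trials, N))   # unordered gamma draws
u = np.sort(samples, axis=1)[:, ::-1]                       # ordered: u_1 >= u_2 >= ... >= u_N

# P(u_1 <= t) = prod_l P_{i_l}(t), with P_{i_l}(t) = 1 - exp(-rate_l * t).
t = 1.5
cdf_theory = np.prod(1.0 - np.exp(-rates * t))
cdf_mc = np.mean(u[:, 0] <= t)
```

The empirical and analytical values agree to within Monte Carlo error; analogous product forms for the other order statistics are what the joint PDF expressions above encode.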
For example, when the RVs involved in one partial sum are not consecutive, i.e., they are separated by other RVs, we need to divide these RVs into smaller sums. In Fig.~\\ref{Example_2}, for instance, we consider the 3-dimensional joint PDF of $\\{\\gamma_{1:K}$, $\\gamma_{2:K}$, $\\gamma_{5:K}$, $\\gamma_{6:K}\\}$, $\\{\\gamma_{3:K},\\gamma_{4:K}\\}$, and $\\{\\gamma_{7:K},\\gamma_{8:K}\\}$ for $K>8$. Note that the first group is not consecutive.\nAs a result, we will derive in step I the 5-dimensional joint MGF of $\\{\\gamma_{1:K},\\gamma_{2:K}\\}$, $\\{\\gamma_{3:K},\\gamma_{4:K}\\}$, $\\{\\gamma_{5:K},\\gamma_{6:K}\\}$, $\\{\\gamma_{7:K}\\}$, $\\{\\gamma_{8:K}\\}$. After the joint PDF of the substituted partial sums is derived with an inverse Laplace transform in step II, we can transform it into the desired lower-dimensional joint PDF with finite integrations.\n\n\\section{Common Functions and Useful Relations}\nIn the following sections, we present several examples to illustrate the proposed analytical framework. Our focus is on how to obtain compact expressions of the joint MGFs for i.n.d. general fading conditions, which can be greatly simplified with the application of the following functions and relations.\n\n\\subsection{Common Functions}\n\n\\begin{enumerate}\n\\item[i)]A mixture of a CDF and an MGF ${c_{{i_l}}}\\left( {{\\gamma},\\lambda } \\right)$:\n\\begin{equation}\\small\\label{eq:CDF_MGF}\n{c_{{i_l}}}\\left( {{\\gamma},\\lambda } \\right) = \\int_0^{{\\gamma}} {{p_{{i_l}}}\\left( x \\right)\\exp \\left( {\\lambda x} \\right)dx},\n\\end{equation}\nwhere $p_{{i_l}}\\left( x \\right)$ denotes the PDF of the RV of interest.\nNote that $c_{{i_l}}\\left( {{\\gamma},0 } \\right)=c_{{i_l}}\\left( {{\\gamma}} \\right)$ is the CDF and $c_{{i_l}}\\left( {\\infty,\\lambda } \\right)$ leads to the MGF. 
\nHere, the variable $\\gamma$ is real, while $\\lambda$ can be complex.\n\n\\item[ii)]A mixture of an exceedance distribution function (EDF) and an MGF, ${e_{{i_l}}}\\left( {{\\gamma},\\lambda } \\right)$:\n\\begin{equation} \\small\\label{eq:EDF_MGF} \n{e_{{i_l}}}\\left( {{\\gamma},\\lambda } \\right) = \\int_{{\\gamma}}^\\infty {{p_{{i_l}}}\\left( x \\right)\\exp \\left( {\\lambda x} \\right)dx}.\n\\end{equation}\n\nNote that ${e_{{i_l}}}\\left( {{\\gamma},0 } \\right)={e_{{i_l}}}\\left( {\\gamma}\\right)$ is the EDF while ${e_{{i_l}}}\\left( {0,\\lambda } \\right)$ gives the MGF.\n\n\n\\item[iii)]An interval MGF, ${\\mu _{{i_l}}}\\left( {{z_a},{z_b},\\lambda } \\right)$:\n\\begin{equation} \\small\\label{eq:IntervalMGF}\n {\\mu _{{i_l}}}\\left( {{z_a},{z_b},\\lambda } \\right) = \\int_{{z_a}}^{{z_b}} {{p_{{i_l}}}\\left( x \\right)\\exp \\left( {\\lambda x} \\right)dx}. \n\\end{equation}\n\nNote that ${\\mu _{{i_l}}}\\left( {0, \\infty, \\lambda } \\right)$ gives the MGF.\n\\end{enumerate}\nNote that the functions defined in (\\ref{eq:CDF_MGF}), (\\ref{eq:EDF_MGF}) and (\\ref{eq:IntervalMGF}) are related as follows:\n\\small\\begin{eqnarray}\n {\\mu _{{i_l}}}\\left( {{z_a},{z_b},\\lambda } \\right)&=& {c_{{i_l}}}\\left( {{z_b},\\lambda } \\right) - {c_{{i_l}}}\\left( {{z_a},\\lambda } \\right) \\\\ \n &=& {e_{{i_l}}}\\left( {{z_a},\\lambda } \\right) - {e_{{i_l}}}\\left( {{z_b},\\lambda } \\right). 
\\label{eq:Interrelation_of_Interval_MGF}\n\\end{eqnarray}\\normalsize\n\n\\subsection{Simplifying Relationships}\n\\begin{enumerate}\n\\item[i)] Integral $J_m$:\n\\\\\nBased on the derivation given in Appendix~\\ref{AP:B}, the integral $J_m$, defined as\n\\small \\begin{eqnarray} \\label{eq:common_fnt_1}\nJ_m &=& \\sum\\limits_{\\substack{ \n {i_m},{i_{m + 1}}, \\ldots ,{i_N} \\\\ \n {i_m} \\ne {i_{m + 1}} \\ne \\cdots \\ne {i_N} \\\\ \n {i_m} \\ne {i_1},{i_2}, \\ldots ,{i_{m - 1}} \\\\ \n {i_{m + 1}} \\ne {i_1},{i_2}, \\ldots ,{i_{m - 1}} \\\\ \n \\vdots \\\\ \n {i_N} \\ne {i_1},{i_2}, \\ldots ,{i_{m - 1}} \n }}^{1,2, \\ldots ,N} {\\int\\limits_0^{{u_{m - 1}}} {d{u_m}{p_{{i_m}}}\\left( {{u_m}} \\right)\\exp \\left( {\\lambda {u_m}} \\right)\\int\\limits_0^{{u_m}} {d{u_{m + 1}}{p_{{i_{m + 1}}}}\\left( {{u_{m + 1}}} \\right)\\exp \\left( {\\lambda {u_{m + 1}}} \\right) } } } \\nonumber \\\\\n&& \\cdots \\int\\limits_0^{u_{N - 1}} d{u_N} p_{i_N}\\left( {u_N} \\right)\\exp \\left( \\lambda {u_N} \\right),\n\\end{eqnarray}\\normalsize\ncan be simply expressed in terms of the function $c_{i_l}\\left( \\gamma,\\lambda \\right)$ as\n\\begin{equation} \\small \\label{eq:CDF_MGF_multiple}\n{J_m} = \\sum\\limits_{\\left\\{ {{i_m},{i_{m + 1}}, \\ldots ,{i_N}} \\right\\} \\in {{\\mathop{\\rm P}\\nolimits} _{N - m + 1}}\\left( {{I_N} - \\left\\{ {{i_1},{i_2}, \\ldots ,{i_{m - 1}}} \\right\\}} \\right)} {\\prod\\limits_{\\scriptstyle l = m \\atop \n \\scriptstyle \\left\\{ {{i_m},{i_{m + 1}}, \\ldots ,{i_N}} \\right\\}}^N {{c_{{i_l}}}\\left( {{u_{m - 1}},\\lambda } \\right)} }.\n\\end{equation}\nHere, the complicated summation notation used in (\\ref{eq:common_fnt_1}) is simplified based on the following power set definition.\nWe define the index set $I_N$ as $I_N=\\left\\{1,2,\\cdots,N \\right\\}$. The collection of all subsets of $I_N$ with $n$ $\\left(n\\le N \\right)$ elements is denoted by $\\mathcal{P}_n\\left(I_N\\right)$. 
The remaining indices can be grouped in the set $I_N - \\mathcal{P}_n\\left(I_N\\right)$. Based on these definitions, the summation in (\\ref{eq:common_fnt_1}) runs over all possible subsets of the index set $I_N$ $\\left(I_N=\\left\\{i_1,i_2,\\cdots,i_N \\right\\}\\right)$ with $N-\\left(m-1\\right)$ elements, excluding the subset $\\left\\{ {{i_1},{i_2}, \\ldots ,{i_{m - 1}}} \\right\\}$; these subsets with $N-\\left(m-1\\right)$ elements are denoted by $\\mathcal{P}_{N-m+1}\\left(I_N-\\left\\{ {{i_1},{i_2}, \\ldots ,{i_{m - 1}}} \\right\\}\\right)$.\n\n\\item[ii)]Integral $J'_m$:\n\\\\\nFollowing a derivation similar to that given in Appendix~\\ref{AP:C}, the integral $J'_m$, defined as\n\\small\\begin{eqnarray} \\label{eq:common_fnt_2}\n{{J'}_m} &=& \\sum\\limits_{\\substack{\n {i_1},{i_2}, \\ldots ,{i_m} \\\\ \n {i_1} \\ne {i_2} \\ne \\cdots \\ne {i_m} \\\\ \n {i_1} \\ne {i_{m + 1}},{i_{m + 2}}, \\ldots ,{i_N} \\\\ \n {i_2} \\ne {i_{m + 1}},{i_{m + 2}}, \\ldots ,{i_N} \\\\ \n \\vdots \\\\ \n {i_m} \\ne {i_{m + 1}},{i_{m + 2}}, \\ldots ,{i_N} \\\\ \n }}^{1,2, \\ldots ,N} {\\int\\limits_{{u_{m + 1}}}^\\infty {d{u_m}{p_{{i_m}}}\\left( {{u_m}} \\right)\\exp \\left( {\\lambda {u_m}} \\right)\\int\\limits_{{u_m}}^\\infty {d{u_{m - 1}}{p_{{i_{m - 1}}}}\\left( {{u_{m - 1}}} \\right)\\exp \\left( {\\lambda {u_{m - 1}}} \\right) } } } \\nonumber \\\\\n&&\\cdots \\int\\limits_{{u_2}}^\\infty {d{u_1}{p_{{i_1}}}\\left( {{u_1}} \\right)\\exp \\left( {\\lambda {u_1}} \\right)},\n\\end{eqnarray}\\normalsize\ncan be simply re-written in terms of the function $e_{i_l}\\left( \\gamma,\\lambda \\right)$, with the help of the power set definition used in III-B-i), as\n\\begin{equation}\\small\\label{eq:EDF_MGF_multiple}\n{{J'}_m} = \\sum\\limits_{\\left\\{ {{i_1},{i_2}, \\ldots ,{i_m}} \\right\\} \\in {{\\mathop{\\rm P}\\nolimits} _m}\\left( {{I_N} - \\left\\{ {{i_{m + 1}},{i_{m + 2}}, \\ldots ,{i_N}} \\right\\}} \\right)} {\\prod\\limits_{\\scriptstyle l = 1 \\atop \n \\scriptstyle 
\\left\\{ {{i_1},{i_2}, \\ldots ,{i_m}} \\right\\}}^m {{e_{{i_l}}}\\left( {{u_{m + 1}},\\lambda } \\right)} }.\n\\end{equation}\n\n\\item[iii)] Integral $J''_{a,b}$:\n\\\\\nBased on the derivation given in Appendix~\\ref{AP:D}, the integral $J''_{a,b}$, defined as\n\\small\\begin{eqnarray} \\label{eq:common_fnt_3}\n{{J''}_{a,b}} &=& \\sum\\limits_{\\substack{\n {i_{a + 1}}, \\ldots ,{i_{b - 1}} \\\\ \n {i_{a + 1}} \\ne {i_{a + 2}} \\ne \\cdots \\ne {i_{b - 1}} \\\\ \n {i_{a + 1}} \\ne {i_1}, \\cdots ,{i_a},{i_b}, \\ldots ,{i_N} \\\\ \n {i_{a + 2}} \\ne {i_1}, \\cdots ,{i_a},{i_b}, \\ldots ,{i_N} \\\\ \n \\vdots \\\\ \n {i_{b - 1}} \\ne {i_1}, \\cdots ,{i_a},{i_b}, \\ldots ,{i_N} \\\\ \n }}^{1,2, \\ldots ,N} {\\int\\limits_{{u_b}}^{{u_a}} {d{u_{b - 1}}{p_{{i_{b - 1}}}}\\left( {{u_{b - 1}}} \\right)\\exp \\left( {\\lambda {u_{b - 1}}} \\right)\\int\\limits_{{u_{b - 1}}}^{{u_a}} {d{u_{b - 2}}{p_{{i_{b - 2}}}}\\left( {{u_{b - 2}}} \\right)\\exp \\left( {\\lambda {u_{b - 2}}} \\right) } } } \\nonumber \\\\\n&&\\cdots \\int\\limits_{{u_{a + 2}}}^{{u_a}} {d{u_{a + 1}}{p_{{i_{a + 1}}}}\\left( {{u_{a + 1}}} \\right)\\exp \\left( {\\lambda {u_{a + 1}}} \\right)},\n\\end{eqnarray}\\normalsize\ncan be simply re-written in terms of the function $\\mu_{i_l}\\left(\\cdot,\\cdot,\\cdot\\right)$ as\n\\begin{equation} \\small\\label{eq:IntervalMGF_multiple}\n{{J''}_{a,b}} = \\sum\\limits_{\\left\\{ {{i_{a + 1}}, \\ldots ,{i_{b - 1}}} \\right\\} \\in {{\\mathop{\\rm P}\\nolimits} _{b - a - 1}}\\left( {{I_N} - \\left\\{ {{i_1}, \\cdots ,{i_a},{i_b}, \\ldots ,{i_N}} \\right\\}} \\right)} {\\prod\\limits_{\\scriptstyle l = a + 1 \\atop \n \\scriptstyle \\left\\{ {{i_{a + 1}}, \\ldots ,{i_{b - 1}}} \\right\\}}^{b - 1} {{\\mu _{{i_l}}}\\left( {{u_b},{u_a},\\lambda } \\right)} }\\quad\\quad\\text{for }b>a.\n\\end{equation}\n\\end{enumerate}\n\n\\section{Sample Cases when All $N$ Ordered RVs are Considered}\n\\begin{theorem} \\label{th:case1_1} (PDF of $\\sum\\limits_{n = 1}^N {u_n }$ among $N$ ordered 
RVs)\\\\\nLet $Z_1 = \\sum\\limits_{n=1}^{N} {u_n}$ for convenience. We can derive the PDF of $Z=\\left[Z_1\\right]$ as\n\\small\\begin{eqnarray}\n{p_Z}\\left( {{z_1}} \\right) &=& L_{{S_1}}^{ - 1}\\left\\{ {{\\mu _Z}\\left( { - {S_1}} \\right)} \\right\\} \\nonumber \\\\\n&=& \\sum\\limits_{\\left\\{ {{i_1},{i_2}, \\ldots ,{i_N}} \\right\\} \\in {{\\mathop{\\rm P}\\nolimits} _N}\\left( {{I_N}} \\right)} {L_{{S_1}}^{ - 1}\\left\\{ {\\prod\\limits_{\\scriptstyle l = 1 \\atop \n \\scriptstyle \\left\\{ {{i_1},{i_2}, \\ldots ,{i_N}} \\right\\}}^N {{c_{{i_l}}}\\left( {\\infty , - {S_1}} \\right)} } \\right\\}},\n\\end{eqnarray}\\normalsize\nwhere $\\mathcal{L}_{S_1 }^{ - 1}\\{\\cdot\\}$ denotes the inverse Laplace transform with respect to $S_1$. \n\\end{theorem}\n\n\\begin{proof}\nThe MGF of $Z=\\left[Z_1\\right]$ is given by the expectation\n\\small \\begin{eqnarray} \\label{eq:MGF_of_pure_MRC_integralform}\nMGF_Z\\left( {{\\lambda _1}} \\right) &=& E\\left\\{ {\\exp \\left( {{\\lambda _1}{z_1}} \\right)} \\right\\} \\nonumber\n\\\\\n&=& \\sum\\limits_{\\substack{\n {{i_1},{i_2}, \\cdots ,{i_N}} \\\\\n {{i_1} \\ne {i_2} \\ne \\cdots \\ne {i_N}} \\\\\n}}^{1,2, \\cdots ,N} {\\int\\limits_0^\\infty {d{u_1}{p_{{i_1}}}\\left( {{u_1}} \\right)\\exp \\left( {{\\lambda _1}{u_1}} \\right)\\int\\limits_0^{{u_1}} {d{u_2}{p_{{i_2}}}\\left( {{u_2}} \\right)\\exp \\left( {{\\lambda _1}{u_2}} \\right)} } } \\nonumber\n\\\\\n&&\\times \\cdots \\times \\int\\limits_0^{{u_{N - 1}}} {d{u_N}{p_{{i_N}}}\\left( {{u_N}} \\right)\\exp \\left( {{\\lambda _1}{u_N}} \\right)},\n\\end{eqnarray} \\normalsize\nwhere ${\\rm E}\\left\\{ \\cdot \\right\\}$ denotes the expectation operator.\nBy applying (\\ref{eq:CDF_MGF_multiple}), we can obtain the MGF of $Z_1 = \\sum_{n=1}^{N} {u_n}$ as\n\\small\\begin{eqnarray} \\label{eq:MGF_of_pure_MRC}\nMGF_Z\\left( {{\\lambda _1}} \\right) = \\sum\\limits_{\\left\\{ {{i_1},{i_2}, \\ldots ,{i_N}} \\right\\} \\in {{\\mathop{\\rm P}\\nolimits} _N}\\left( {{I_N}} 
\\right)} {\\prod\\limits_{\\scriptstyle l = 1 \\atop \n \\scriptstyle \\left\\{ {{i_1},{i_2}, \\ldots ,{i_N}} \\right\\}}^N {{c_{{i_l}}}\\left( {\\infty ,{\\lambda _1}} \\right)} }.\n\\end{eqnarray} \\normalsize\nTherefore, we can derive the PDF of $Z_1 = \\sum_{n=1}^{N} {u_{n}}$ by applying the inverse Laplace transform as\n\\small\\begin{eqnarray} \n{p_Z}\\left( {{z_1}} \\right) &=& L_{{S_1}}^{ - 1}\\left\\{ {{\\mu _Z}\\left( { - {S_1}} \\right)} \\right\\} \\nonumber\n\\\\\n&=& \\sum\\limits_{\\left\\{ {{i_1},{i_2}, \\ldots ,{i_N}} \\right\\} \\in {{\\mathop{\\rm P}\\nolimits} _N}\\left( {{I_N}} \\right)} {L_{{S_1}}^{ - 1}\\left\\{ {\\prod\\limits_{\\scriptstyle l = 1 \\atop \n \\scriptstyle \\left\\{ {{i_1},{i_2}, \\ldots ,{i_N}} \\right\\}}^N {{c_{{i_l}}}\\left( {\\infty , - {S_1}} \\right)} } \\right\\}}.\n\\end{eqnarray}\\normalsize\n\\end{proof}\n\n\n\\begin{theorem} \\label{th:case1_2}({Joint PDF of $\\sum\\limits_{n = 1}^m {u_n }$ and $\\sum\\limits_{n = m + 1}^N {u_n }$})\n\\\\\nLet $Z_1 = \\sum\\limits_{n = 1}^m {u_n }$ and $Z_2 = \\sum\\limits_{n = m + 1}^N {u_n }$ for convenience, then we can derive the 2-dimensional joint PDF of $Z=\\left[Z_1, Z_2 \\right]$ as\n\\small\\begin{eqnarray} \\label{eq:joint_PDF_3}\n{p_Z}\\left( {{z_1},{z_2}} \\right) \\!\\!&=& \\!\\!L_{{S_1},{S_2}}^{ - 1}\\left\\{ {{\\mu _Z}\\left( { - {S_1}, - {S_2}} \\right)} \\right\\} \\nonumber \\\\\n&=& \\!\\!\\sum\\limits_{{i_m} = 1}^N {\\int\\limits_0^\\infty {d{u_m}{p_{{i_m}}}\\left( {{u_m}} \\right)} } \\sum\\limits_{\\left\\{ {{i_1}, \\ldots ,{i_{m - 1}}} \\right\\} \\in {{\\mathop{\\rm P}\\nolimits} _{m - 1}}\\left( {{I_N} - \\left\\{ {{i_m}} \\right\\}} \\right)} {L_{{S_1}}^{ - 1}\\left\\{ {\\prod\\limits_{\\scriptstyle k = 1 \\atop \n \\scriptstyle \\left\\{ {{i_1}, \\ldots ,{i_{m - 1}}} \\right\\}}^{m - 1}\\!\\! 
{{e_{{i_k}}}\\left( {{u_m}, - {S_1}} \\right)\\exp \\left( { - {S_1}{u_m}} \\right)} } \\right\\}} \\nonumber \\\\\n&&\\!\\!\\times \\sum\\limits_{\\left\\{ {{i_{m + 1}}, \\ldots ,{i_N}} \\right\\} \\in {{\\mathop{\\rm P}\\nolimits} _{N - m}}\\left( {{I_N} - \\left\\{ {{i_m}} \\right\\} - \\left\\{ {{i_1}, \\ldots ,{i_{m - 1}}} \\right\\}} \\right)} {L_{{S_2}}^{ - 1}\\left\\{ {\\prod\\limits_{\\scriptstyle l = m + 1 \\atop \n \\scriptstyle \\left\\{ {{i_{m + 1}}, \\ldots ,{i_N}} \\right\\}}^N \\!\\!{{c_{{i_l}}}\\left( {{u_m}, - {S_2}} \\right)} } \\right\\}} \\nonumber\n\\\\\n&& {\\rm for}\\; z_1\\ge \\frac{m}{N-m}z_2.\n\\end{eqnarray}\\normalsize\n\\end{theorem}\n\\begin{proof}\nThe second order MGF of $Z=\\left[Z_1,Z_2\\right]$ is given by the expectation\n\\small\\begin{eqnarray} \\label{eq:joint_MGF_7_integralform}\nMGF_Z\\left( {{\\lambda _1},{\\lambda _2}} \\right) &=& \\sum\\limits_{\\substack{\n {{i_1},{i_2}, \\cdots ,{i_N}} \\\\\n {{i_1} \\ne {i_2} \\ne \\cdots \\ne {i_N}} \\\\\n}}^{1,2, \\cdots ,N} {\\int\\limits_0^\\infty {d{u_1}{p_{{i_1}}}\\left( {{u_1}} \\right)\\exp \\left( {{\\lambda _1}{u_1}} \\right) \\cdots \\int\\limits_0^{{u_{m - 1}}} {d{u_m}{p_{{i_m}}}\\left( {{u_m}} \\right)\\exp \\left( {{\\lambda _1}{u_m}} \\right)} } } \\nonumber \\\\\n&&\\times \\int\\limits_0^{{u_m}} {d{u_{m + 1}}{p_{{i_{m + 1}}}}\\left( {{u_{m + 1}}} \\right)\\exp \\left( {{\\lambda _2}{u_{m + 1}}} \\right) \\cdots \\int\\limits_0^{{u_{N - 1}}} {d{u_N}{p_{{i_N}}}\\left( {{u_N}} \\right)\\exp \\left( {{\\lambda _2}{u_N}} \\right)} }.\n\\end{eqnarray}\\normalsize\nWe show in Appendix~\\ref{AP:E} that by applying (\\ref{eq:CDF_MGF_multiple}) and \\cite[Eq. 
(2)]{kn:unified_approach} and then (\\ref{eq:EDF_MGF_multiple}), we can obtain the second order MGF of $Z$ as\n\\small\\begin{eqnarray} \\label{eq:joint_MGF_2}\nMGF_Z \\left( {\\lambda _1 ,\\lambda _2 } \\right) \\!\\!\\!&=& \\!\\!\\!\\sum\\limits_{{i_m} = 1}^N {\\int\\limits_0^\\infty {d{u_m}{p_{{i_m}}}\\left( {{u_m}} \\right)\\exp \\left( {{\\lambda _1}{u_m}} \\right)} } \\nonumber \\\\\n&&\\!\\!\\times \\sum\\limits_{\\left\\{ {{i_1}, \\ldots ,{i_{m - 1}}} \\right\\} \\in {{\\mathop{\\rm P}\\nolimits} _{m - 1}}\\left( {{I_N} - \\left\\{ {{i_m}} \\right\\}} \\right)} {\\prod\\limits_{\\scriptstyle k = 1 \\atop \n \\scriptstyle \\left\\{ {{i_1}, \\ldots ,{i_{m - 1}}} \\right\\}}^{m - 1} {{e_{{i_k}}}\\left( {{u_m},{\\lambda _1}} \\right)} } \\nonumber \\\\\n&&\\!\\!\\times \\sum\\limits_{\\left\\{ {{i_{m + 1}}, \\ldots ,{i_N}} \\right\\} \\in {{\\mathop{\\rm P}\\nolimits} _{N - m}}\\left( {{I_N} - \\left\\{ {{i_m}} \\right\\} - \\left\\{ {{i_1}, \\ldots ,{i_{m - 1}}} \\right\\}} \\right)} {\\prod\\limits_{\\scriptstyle l = m + 1 \\atop \n \\scriptstyle \\left\\{ {{i_{m + 1}}, \\ldots ,{i_N}} \\right\\}}^N {{c_{{i_l}}}\\left( {{u_m},{\\lambda _2}} \\right)} }.\n\\end{eqnarray}\\normalsize\n\nAgain, letting $\\lambda _1 = - S_1$ and $\\lambda _2 = - S_2$, we can obtain the desired 2-dimensional joint PDF of $Z_1 = \\sum\\limits_{n = 1}^m {u_n }$ and $Z_2 = \\sum\\limits_{n = m + 1}^N {u_n }$ by applying the inverse Laplace transform as\n\\small\\begin{eqnarray}\n{p_Z}\\left( {{z_1},{z_2}} \\right) \\!\\!\\!&=& \\!\\!\\!L_{{S_1},{S_2}}^{ - 1}\\left\\{ {{\\mu _Z}\\left( { - {S_1}, - {S_2}} \\right)} \\right\\} \\nonumber \\\\\n&=& \\!\\!\\!\\sum\\limits_{{i_m} = 1}^N {\\int\\limits_0^\\infty {d{u_m}{p_{{i_m}}}\\left( {{u_m}} \\right)} } \\sum\\limits_{\\left\\{ {{i_1}, \\ldots ,{i_{m - 1}}} \\right\\} \\in {{\\mathop{\\rm P}\\nolimits} _{m - 1}}\\left( {{I_N} - \\left\\{ {{i_m}} \\right\\}} \\right)} {L_{{S_1}}^{ - 1}\\left\\{ {\\prod\\limits_{\\scriptstyle k = 1 
\\atop \n \\scriptstyle \\left\\{ {{i_1}, \\ldots ,{i_{m - 1}}} \\right\\}}^{m - 1} {{e_{{i_k}}}\\left( {{u_m}, - {S_1}} \\right)\\exp \\left( { - {S_1}{u_m}} \\right)} } \\right\\}} \\nonumber \\\\\n&&\\!\\!\\!\\times \\sum\\limits_{\\left\\{ {{i_{m + 1}}, \\ldots ,{i_N}} \\right\\} \\in {{\\mathop{\\rm P}\\nolimits} _{N - m}}\\left( {{I_N} - \\left\\{ {{i_m}} \\right\\} - \\left\\{ {{i_1}, \\ldots ,{i_{m - 1}}} \\right\\}} \\right)} {L_{{S_2}}^{ - 1}\\left\\{ {\\prod\\limits_{\\scriptstyle l = m + 1 \\atop \n \\scriptstyle \\left\\{ {{i_{m + 1}}, \\ldots ,{i_N}} \\right\\}}^N {{c_{{i_l}}}\\left( {{u_m}, - {S_2}} \\right)} } \\right\\}}.\n\\end{eqnarray}\\normalsize\n\\end{proof}\n\n\n\n\\begin{theorem} (Joint PDF of $u_m$ and $\\sum\\limits_{\\scriptstyle n = 1 \\hfill \\atop \\scriptstyle n \\ne m \\hfill}^N {u_n } $)\n\\\\\nLet ${Z_1} = {u_m}$ and ${Z_2} = \\sum\\limits_{\\scriptstyle n = 1 \\atop \\scriptstyle n \\ne m}^N {{u_n}}$ for convenience. We can obtain the 2-dimensional joint PDF of $Z=\\left[Z_1, Z_2 \\right]$ as\n\\small\\begin{eqnarray}\n\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!&&{p_Z}\\left( {{z_1},{z_2}} \\right) = L_{{S_1},{S_2}}^{ - 1}\\left\\{ {{\\mu _Z}\\left( { - {S_1}, - {S_2}} \\right)} \\right\\} \\nonumber \\\\\n\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!&&= \\sum\\limits_{{i_m} = 1}^N {\\int\\limits_0^\\infty {d{u_m}{p_{{i_m}}}\\left( {{u_m}} \\right)L_{{S_1}}^{ - 1}\\left\\{ {\\exp \\left( { - {S_1}{u_m}} \\right)} \\right\\}} } \\sum\\limits_{\\left\\{ {{i_1}, \\ldots ,{i_{m - 1}}} \\right\\} \\in {{\\mathop{\\rm P}\\nolimits} _{m - 1}}\\left( {{I_N} - \\left\\{ {{i_m}} \\right\\}} \\right)} \\nonumber \\\\\n\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\n&&\\quad\\times {\\sum\\limits_{\\left\\{ {{i_{m + 1}}, \\ldots ,{i_N}} \\right\\} \\in {{\\mathop{\\rm P}\\nolimits} _{N - 
m}}\\left( {{I_N} - \\left\\{ {{i_m}} \\right\\} - \\left\\{ {{i_1}, \\ldots ,{i_{m - 1}}} \\right\\}} \\right)} {L_{{S_2}}^{ - 1}\\left\\{ {\\prod\\limits_{\\scriptstyle k = 1 \\atop \n \\scriptstyle \\left\\{ {{i_1}, \\ldots ,{i_{m - 1}}} \\right\\}}^{m - 1} {{e_{{i_k}}}\\left( {{u_m}, - {S_2}} \\right)} \\prod\\limits_{\\scriptstyle l = m + 1 \\atop \n \\scriptstyle \\left\\{ {{i_{m + 1}}, \\ldots ,{i_N}} \\right\\}}^N {{c_{{i_l}}}\\left( {{u_m}, - {S_2}} \\right)} } \\right\\}} } .\n\\end{eqnarray}\\normalsize\n\n\\end{theorem}\n\n\\begin{proof}\nSimilarly to {\\it Theorems \\ref{th:case1_1}} and {\\it \\ref{th:case1_2}}, by applying (\\ref{eq:CDF_MGF_multiple}), \\cite[Eq. (2)]{kn:unified_approach} and (\\ref{eq:EDF_MGF_multiple}), we can obtain the second order MGF of $Z_1 = u_m $ and $Z_2 = \\sum\\limits_{\\scriptstyle n = 1 \\hfill \\atop \\scriptstyle n \\ne m \\hfill}^N {u_n }$. The detailed derivation is omitted.\n\\end{proof}\n\n\n\n\\section{Sample cases when only $N_s$ ordered RVs are considered}\nLet us now consider the cases where only the best $N_s$ $\\left(N_s \\le N\\right)$ ordered RVs are involved. 
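Before specializing to the best-$N_s$ case, the all-$N$ results admit a quick numerical cross-check. For i.n.d. exponential RVs, the sum of all $N$ ordered RVs equals the sum of the unordered ones, so the PDF derived above for $\sum_{n=1}^{N} u_n$ must coincide with the classical hypoexponential density. The sketch below is illustrative only: the means are assumed values, not quantities from the paper.

```python
import numpy as np

# Illustrative cross-check (assumed means, not values from the paper): for
# i.n.d. exponentials, the sum of all N *ordered* RVs equals the plain sum,
# whose PDF is the hypoexponential density
#   p(z) = sum_i w_i * lam_i * exp(-lam_i*z),  w_i = prod_{j!=i} lam_j/(lam_j-lam_i)
rng = np.random.default_rng(0)
g = np.array([0.5, 1.0, 2.0])      # distinct average powers \bar{gamma}_{i_l}
lam = 1.0 / g                      # exponential rates

def hypoexp_pdf(z, lam):
    """Closed-form PDF of a sum of independent exponentials with distinct rates."""
    z = np.asarray(z, dtype=float)
    out = np.zeros_like(z)
    for i, li in enumerate(lam):
        w = np.prod([lj / (lj - li) for j, lj in enumerate(lam) if j != i])
        out += w * li * np.exp(-li * z)
    return out

# Monte Carlo: the ordering is irrelevant for the full sum, so sort-then-sum
# reproduces the same distribution as summing directly.
zs = np.sort(rng.exponential(scale=g, size=(200_000, g.size)), axis=1).sum(axis=1)
hist, edges = np.histogram(zs, bins=40, range=(0.0, 15.0), density=True)
mid = 0.5 * (edges[:-1] + edges[1:])
max_err = float(np.max(np.abs(hist - hypoexp_pdf(mid, lam))))
print(f"max abs deviation: {max_err:.4f}")
```

The deviation is at the Monte Carlo noise level, as expected when the analytical PDF is correct; the same histogram comparison can be repeated against any of the specializations in Section~\ref{SEC:VI}.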
\n\n\n\\begin{theorem} \\label{th:case2_1} ({PDF of $\\sum\\limits_{n = 1}^{N_s } {u_n }$}, $N_s\\ge 2$)\n\\\\\nLet $Z' = \\sum\\limits_{n=1}^{N_s} {u_n}$ for convenience, then we can derive the PDF of $Z'$ as\n\\small\\begin{eqnarray} \\label{eq:PDF_of_pure_GSC}\np_{Z'} \\left( x \\right) = p_{\\sum_{n = 1}^{N_s } {u_n } } \\left( x \\right)=\n\\int_0^{\\frac{x}{{N_s }}} {p_Z \\left( {x - z_2 ,z_2 } \\right)dz_2 } & \\text{for } N_s \\ge 2, \n\\end{eqnarray}\\normalsize\n\\end{theorem}\nwhere\n\\small\\begin{eqnarray} \\label{eq:PDF_of_pure_GSC_m}\n{p_Z}\\left( {{z_1},{z_2}} \\right) &=& L_{{S_1},{S_2}}^{ - 1}\\left\\{ {{\\mu _Z}\\left( { - {S_1}, - {S_2}} \\right)} \\right\\} \\nonumber \\\\\n&=& \\sum\\limits_{{i_{{N_s}}} = 1}^N {\\int\\limits_0^\\infty {d{u_{{N_s}}}{p_{{i_{{N_s}}}}}\\left( {{u_{{N_s}}}} \\right)L_{{S_2}}^{ - 1}\\left\\{ {\\exp \\left( { - {S_2}{u_{{N_s}}}} \\right)} \\right\\}\\sum\\limits_{\\substack{\n {i_{{N_s} + 1}}, \\ldots ,{i_N} \\\\ \n {i_{{N_s} + 1}} \\ne \\cdots \\ne {i_N} \\\\ \n {i_{{N_s} + 1}} \\ne {i_{{N_s}}} \\\\ \n \\vdots \\\\ \n {i_N} \\ne {i_{{N_s}}} \\\\ \n }}^{1,2, \\ldots ,N} {\\prod\\limits_{\\scriptstyle k = {N_s} + 1 \\atop \n \\scriptstyle \\left\\{ {{i_{{N_s} + 1}}, \\ldots ,{i_N}} \\right\\}}^N {{P_{{i_k}}}\\left( {{u_{{N_s}}}} \\right)} } } } \\nonumber \\\\\n&&\\times \\sum\\limits_{\\left\\{ {{i_1}, \\ldots ,{i_{{N_s} - 1}}} \\right\\} \\in {{\\mathop{\\rm P}\\nolimits} _{{N_s} - 1}}\\left( {{I_N} - \\left\\{ {{i_{{N_s}}}} \\right\\} - \\left\\{ {{i_{{N_s} + 1}}, \\ldots ,{i_N}} \\right\\}} \\right)} {L_{{S_1}}^{ - 1}\\left\\{ {\\prod\\limits_{\\scriptstyle l = 1 \\atop \n \\scriptstyle \\left\\{ {{i_1}, \\ldots ,{i_{{N_s} - 1}}} \\right\\}}^{{N_s} - 1} {{e_{{i_l}}}\\left( {{u_{{N_s}}}, - {S_1}} \\right)} } \\right\\}}.\n\\end{eqnarray}\\normalsize\n\n\\begin{proof}\nWe only need to consider $u_{N_s}$ separately in this case. \nLet $Z_1=\\sum\\limits_{n=1}^{N_s-1} {u_n}$ and $Z_2=u_{N_s}$. 
The target second order MGF of $Z=[Z_1, Z_2]$ is given by the expectation\n\\small\\begin{eqnarray} \\label{eq:MGF_of_pure_GSC_integralform}\nMGF_Z \\left( {\\lambda _1 ,\\lambda _2 } \\right) &=& {\\rm E}\\left\\{ {\\exp \\left( {\\lambda _1 z_1 + \\lambda _2 z_2 } \\right)} \\right\\} \\nonumber \\\\\n&=&\\sum\\limits_{\\substack{\n {{i_1},{i_2}, \\cdots ,{i_N}} \\\\\n {{i_1} \\ne {i_2} \\ne \\cdots \\ne {i_N}} \\\\\n}}^{1,2, \\cdots ,N} {\\int\\limits_0^\\infty {d{u_1}{p_{{i_1}}}\\left( {{u_1}} \\right)\\exp \\left( {{\\lambda _1}{u_1}} \\right) \\cdots \\int\\limits_0^{{u_{{N_s} - 2}}} {d{u_{{N_s} - 1}}{p_{{i_{{N_s} - 1}}}}\\left( {{u_{{N_s} - 1}}} \\right)\\exp \\left( {{\\lambda _1}{u_{{N_s} - 1}}} \\right)} } } \\nonumber \\\\\n&&\\times \\int\\limits_0^{{u_{{N_s} - 1}}} {d{u_{{N_s}}}{p_{{i_{{N_s}}}}}\\left( {{u_{{N_s}}}} \\right)\\exp \\left( {{\\lambda _2}{u_{{N_s}}}} \\right)\\prod\\limits_{j = {N_s} + 1}^N {{P_{{i_j}}}\\left( {{u_{{N_s}}}} \\right)} }.\n\\end{eqnarray}\\normalsize\nBy simply applying \\cite[Eq. (2)]{kn:unified_approach} and then (\\ref{eq:EDF_MGF_multiple}) to (\\ref{eq:MGF_of_pure_GSC_integralform}), we can obtain the second order MGF result as\n\\small\\begin{eqnarray} \\label{MGF_pure_GSC}\nMGF_Z \\left( {\\lambda _1 ,\\lambda _2 } \\right) \\!\\!&=&\\!\\! 
\\sum\\limits_{{i_{{N_s}}} = 1}^N {\\int\\limits_0^\\infty {d{u_{{N_s}}}{p_{{i_{{N_s}}}}}\\left( {{u_{{N_s}}}} \\right)\\exp \\left( {{\\lambda _2}{u_{{N_s}}}} \\right)\\sum\\limits_{\\substack{\n {i_{{N_s} + 1}}, \\ldots ,{i_N} \\\\ \n {i_{{N_s} + 1}} \\ne \\cdots \\ne {i_N} \\\\ \n {i_{{N_s} + 1}} \\ne {i_{{N_s}}} \\\\ \n \\vdots \\\\ \n {i_N} \\ne {i_{{N_s}}} \\\\ \n }}^{1,2, \\ldots ,N} {\\prod\\limits_{\\scriptstyle k = {N_s} + 1 \\atop \n \\scriptstyle \\left\\{ {{i_{{N_s} + 1}}, \\ldots ,{i_N}} \\right\\}}^N {{P_{{i_k}}}\\left( {{u_{{N_s}}}} \\right)} } } } \\nonumber \\\\\n&&\\!\\!\\times \\sum\\limits_{\\left\\{ {{i_1}, \\ldots ,{i_{{N_s} - 1}}} \\right\\} \\in {{\\mathop{\\rm P}\\nolimits} _{{N_s} - 1}}\\left( {{I_N} - \\left\\{ {{i_{{N_s}}}} \\right\\} - \\left\\{ {{i_{{N_s} + 1}}, \\ldots ,{i_N}} \\right\\}} \\right)} {\\prod\\limits_{\\scriptstyle l = 1 \\atop \n \\scriptstyle \\left\\{ {{i_1}, \\ldots ,{i_{{N_s} - 1}}} \\right\\}}^{{N_s} - 1} {{e_{{i_l}}}\\left( {{u_{{N_s}}},{\\lambda _1}} \\right)} }.\n\\end{eqnarray} \\normalsize\nAgain, letting $\\lambda _1 = - S_1$ and $\\lambda _2 = - S_2$, we can obtain the 2-dimensional joint PDF of $Z_1=\\sum\\limits_{n=1}^{N_s-1} {u_n}$ and $Z_2=u_{N_s}$ by applying the inverse Laplace transform as\n\\small\\begin{eqnarray}\n{p_Z}\\left( {{z_1},{z_2}} \\right)\\!\\!&=& \\!\\!L_{{S_1},{S_2}}^{ - 1}\\left\\{ {{\\mu _Z}\\left( { - {S_1}, - {S_2}} \\right)} \\right\\} \\nonumber \\\\\n&=&\\!\\! 
\\sum\\limits_{{i_{{N_s}}} = 1}^N {\\int\\limits_0^\\infty {d{u_{{N_s}}}{p_{{i_{{N_s}}}}}\\left( {{u_{{N_s}}}} \\right)L_{{S_2}}^{ - 1}\\left\\{ {\\exp \\left( { - {S_2}{u_{{N_s}}}} \\right)} \\right\\}\\sum\\limits_{\\substack{\n {i_{{N_s} + 1}}, \\ldots ,{i_N} \\\\ \n {i_{{N_s} + 1}} \\ne \\cdots \\ne {i_N} \\\\ \n {i_{{N_s} + 1}} \\ne {i_{{N_s}}} \\\\ \n \\vdots \\\\ \n {i_N} \\ne {i_{{N_s}}} \\\\ \n }}^{1,2, \\ldots ,N} {\\prod\\limits_{\\scriptstyle k = {N_s} + 1 \\atop \n \\scriptstyle \\left\\{ {{i_{{N_s} + 1}}, \\ldots ,{i_N}} \\right\\}}^N {{P_{{i_k}}}\\left( {{u_{{N_s}}}} \\right)} } } } \\nonumber \\\\\n&&\\!\\! \\times \\sum\\limits_{\\left\\{ {{i_1}, \\ldots ,{i_{{N_s} - 1}}} \\right\\} \\in {{\\mathop{\\rm P}\\nolimits} _{{N_s} - 1}}\\left( {{I_N} - \\left\\{ {{i_{{N_s}}}} \\right\\} - \\left\\{ {{i_{{N_s} + 1}}, \\ldots ,{i_N}} \\right\\}} \\right)} {L_{{S_1}}^{ - 1}\\left\\{ {\\prod\\limits_{\\scriptstyle l = 1 \\atop \n \\scriptstyle \\left\\{ {{i_1}, \\ldots ,{i_{{N_s} - 1}}} \\right\\}}^{{N_s} - 1} {{e_{{i_l}}}\\left( {{u_{{N_s}}}, - {S_1}} \\right)} } \\right\\}}.\n\\end{eqnarray}\\normalsize\nFinally, noting that $Z'=Z_1+Z_2$, we can obtain the target PDF of $Z'$ with the following finite integration\n\\begin{equation} \\small\np_{Z'}(x)=\\int_0^{\\frac{x}{{N_s }}} {p_Z \\left( {x - z_2 ,z_2 } \\right)dz_2 }.\n\\end{equation}\n\\end{proof}\n\n\\begin{theorem} \\label{th:case2_2} ({Joint PDF of $u_{m}$ and $\\sum\\limits_{\\scriptstyle n = 1 \\hfill \\atop \\scriptstyle n \\ne m \\hfill}^{N_s } {u_{n} }$} for $1 < m < N_s - 1$)\n\nLet $X=u_m$ and $Y=\\sum\\limits_{\\scriptstyle n = 1 \\hfill \\atop \\scriptstyle n \\ne m \\hfill}^{N_s } {u_n }$, then the joint PDF of $Z=\\left[X, Y\\right]$ can be obtained as\n\\small\\begin{eqnarray} \\label{eq:joint_PDF_1}\n\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\! 
p_{Z}\\left(x, y\\right) &=& p_{u_m ,\\sum\\limits_{\\scriptstyle n = 1 \\hfill \\atop \n \\scriptstyle n \\ne m \\hfill}^{N_s } {u_n } } \\left( {x,y} \\right) \\nonumber\n\\\\\n\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\! &=&\\!\\!\\!\\!\\!\\!\\!\n\\int_0^x {\\int_{\\left( {m - 1} \\right)x}^{y - \\left( {N_s - m} \\right)z_4 } {p_{\\sum\\limits_{n = 1}^{m - 1} {u_n } , u_m ,\\sum\\limits_{n = m + 1}^{N_s - 1} {u_n } , u_{N_s} } \\!\\!\\!\\!\\!\\!\\! \\left( {z_1 ,x,y \\!-\\!z_1\\!-\\!z_4 ,z_4 } \\right)dz_1 } dz_4 }.\n\\end{eqnarray}\\normalsize\n\\end{theorem}\n\n\\begin{proof}\nFor the joint PDF of $u_m$ and $\\sum\\limits_{\\scriptstyle n = 1 \\hfill \\atop \\scriptstyle n \\ne m \\hfill}^{N_s } {u_n }$, as one of the original groups is split by $u_m$, we should consider substituted groups for the split group instead of the original groups, as shown in Fig.~\\ref{Example_3_2}. As a result, we will start by obtaining a fourth-order MGF. The resulting higher-dimensional joint PDF can then be used to find the desired 2-dimensional joint PDF of interest by transformation.\n\nApplying the results in \\cite[Eq. (2)]{kn:unified_approach}, (\\ref{eq:CDF_MGF_multiple}), (\\ref{eq:EDF_MGF_multiple}) and (\\ref{eq:IntervalMGF_multiple}), we derive in Appendix~\\ref{AP:G} the target joint MGF.\nLet $Z_1 = \\sum\\limits_{n = 1}^{m - 1} {u_n }$, $Z_2 = u_m$, $Z_3 = \\sum\\limits_{n = m + 1}^{N_s - 1} {u_n }$, and $Z_4 = u_{N_s}$, then\n\\small \\begin{eqnarray} \\label{eq:joint_MGF_GSC_2}\n\\!\\!\\!\\!\\!\\!MGF_Z \\left( {\\lambda _1 ,\\lambda _2 ,\\lambda _3 ,\\lambda _4 } \\right) \\!\\!\\!\\!&=&\\!\\!\\!\\! 
\\sum\\limits_{\\scriptstyle {i_{{N_s}}}, \\ldots ,{i_N} \\atop \n \\scriptstyle {i_{{N_s}}} \\ne \\cdots \\ne {i_N}}^{1,2, \\ldots ,N} {\\int\\limits_0^\\infty {d{u_{{N_s}}}{p_{{i_{{N_s}}}}}\\left( {{u_{{N_s}}}} \\right)\\exp \\left( {{\\lambda _4}{u_{{N_s}}}} \\right)\\!\\!\\prod\\limits_{\\scriptstyle j = {N_s} + 1 \\atop \n \\scriptstyle \\left\\{ {{i_{{N_s} + 1}}, \\ldots ,{i_N}} \\right\\}}^N {{P_{{i_j}}}\\left( {{u_{{N_s}}}} \\right)} } } \\nonumber \\\\\n\\!\\!\\!\\!\\!\\!&&\\!\\!\\!\\!\\times \\sum\\limits_{\\scriptstyle {i_m} = 1 \\atop \n \\scriptstyle {i_m} \\ne {i_{{N_s}}}, \\ldots ,{i_N}}^N {\\int\\limits_{{u_{{N_s}}}}^\\infty {d{u_m}{p_{{i_m}}}\\left( {{u_m}} \\right)\\exp \\left( {{\\lambda _2}{u_m}} \\right)} } \\nonumber \\\\\n\\!\\!\\!\\!\\!\\!&&\\!\\!\\!\\!\\times \\sum\\limits_{\\left\\{ {{i_{m + 1}}, \\ldots ,{i_{{N_s} - 1}}} \\right\\} \\in {{\\mathop{\\rm P}\\nolimits} _{{N_s} - m - 1}}\\left( {{I_N} - \\left\\{ {{i_m}} \\right\\} - \\left\\{ {{i_{{N_s}}}, \\ldots ,{i_N}} \\right\\}} \\right)} {\\prod\\limits_{\\scriptstyle k = m + 1 \\atop \n \\scriptstyle \\left\\{ {{i_{m + 1}}, \\ldots ,{i_{{N_s} - 1}}} \\right\\}}^{{N_s} - 1} {{\\mu _{{i_k}}}\\left( {{u_{{N_s}}},{u_m},{\\lambda _3}} \\right)} } \\nonumber \\\\\n\\!\\!\\!\\!\\!\\!&&\\!\\!\\!\\!\\times \\sum\\limits_{\\left\\{ {{i_1}, \\ldots ,{i_{m - 1}}} \\right\\} \\in {{\\mathop{\\rm P}\\nolimits} _{m - 1}}\\left( {{I_N} - \\left\\{ {{i_m}} \\right\\} - \\left\\{ {{i_{{N_s}}}, \\ldots ,{i_N}} \\right\\} - \\left\\{ {{i_{m + 1}}, \\ldots ,{i_{{N_s} - 1}}} \\right\\}} \\right)} {\\prod\\limits_{\\scriptstyle l = 1 \\atop \n \\scriptstyle \\left\\{ {{i_1}, \\ldots ,{i_{m - 1}}} \\right\\}}^{m - 1}\\!\\! 
{{e_{{i_l}}}\\left( {{u_m},{\\lambda _1}} \\right)} }.\n\\end{eqnarray} \\normalsize\n\n\nStarting from the MGF expressions given above, we apply inverse Laplace transforms in Appendix~\\ref{AP:G} in order to derive the following joint PDF\n\n\\small \\begin{eqnarray} \\label{eq:joint_PDF_GSC_2} \n{p_Z}\\left( {{z_1},{z_2},{z_3},{z_4}} \\right) \\!\\!\\!&=& \\!\\!\\!L_{{S_1},{S_2},{S_3},{S_4}}^{ - 1}\\left\\{ {{\\mu _Z}\\left( { - {S_1}, - {S_2}, - {S_3}, - {S_4}} \\right)} \\right\\} \\nonumber \\\\\n&=&\\!\\!\\! \\sum\\limits_{\\scriptstyle {i_{{N_s}}}, \\ldots ,{i_N} \\atop \n \\scriptstyle {i_{{N_s}}} \\ne \\cdots \\ne {i_N}}^{1,2, \\ldots ,N} {\\int\\limits_0^\\infty {d{u_{{N_s}}}{p_{{i_{{N_s}}}}}\\left( {{u_{{N_s}}}} \\right)L_{{S_4}}^{ - 1}\\left\\{ {\\exp \\left( { - {S_4}{u_{{N_s}}}} \\right)} \\right\\}\\prod\\limits_{\\scriptstyle j = {N_s} + 1 \\atop \n \\scriptstyle \\left\\{ {{i_{{N_s} + 1}}, \\ldots ,{i_N}} \\right\\}}^N {{P_{{i_j}}}\\left( {{u_{{N_s}}}} \\right)} } } \\nonumber \\\\\n&&\\!\\!\\!\\times \\sum\\limits_{\\scriptstyle {i_m} = 1 \\atop \n \\scriptstyle {i_m} \\ne {i_{{N_s}}}, \\ldots ,{i_N}}^N {\\int\\limits_{{u_{{N_s}}}}^\\infty {d{u_m}{p_{{i_m}}}\\left( {{u_m}} \\right)L_{{S_2}}^{ - 1}\\left\\{ {\\exp \\left( { - {S_2}{u_m}} \\right)} \\right\\}} } \\nonumber \\\\\n&&\\!\\!\\!\\times \\sum\\limits_{\\left\\{ {{i_{m + 1}}, \\ldots ,{i_{{N_s} - 1}}} \\right\\} \\in {{\\mathop{\\rm P}\\nolimits} _{{N_s} - m - 1}}\\left( {{I_N} - \\left\\{ {{i_m}} \\right\\} - \\left\\{ {{i_{{N_s}}}, \\ldots ,{i_N}} \\right\\}} \\right)} \\!\\! {L_{{S_3}}^{ - 1}\\!\\left\\{\\! {\\prod\\limits_{\\scriptstyle k = m + 1 \\atop \n \\scriptstyle \\left\\{\\! {{i_{m + 1}}, \\ldots ,{i_{{N_s} - 1}}} \\!\\right\\}}^{{N_s} - 1} \\!{{\\mu _{{i_k}}}\\left(\\! {{u_{{N_s}}},{u_m}, - {S_3}} \\!\\right)} } \\!\\right\\}} \\nonumber \\\\\n&&\\!\\!\\!\\times \\!\\sum\\limits_{\\left\\{\\! 
{{i_1}, \\ldots ,{i_{m - 1}}} \\!\\right\\} \\in {{\\mathop{\\rm P}\\nolimits} _{m - 1}}\\left(\\! {{I_N} - \\left\\{\\! {{i_m}} \\!\\right\\} - \\left\\{\\! {{i_{{N_s}}}, \\ldots ,{i_N}} \\!\\right\\} - \\left\\{\\! {{i_{m + 1}}, \\ldots ,{i_{{N_s} - 1}}}\\! \\right\\}} \\!\\right)} \\!\\! {L_{{S_1}}^{ - 1}\\left\\{\\! {\\prod\\limits_{\\scriptstyle l = 1 \\atop \n \\scriptstyle \\left\\{\\! {{i_1}, \\ldots ,{i_{m - 1}}} \\!\\right\\}}^{m - 1} {{e_{{i_l}}}\\left(\\! {{u_m}, - {S_1}} \\!\\right)} } \\!\\right\\}}, \\nonumber\n\\\\\n&&\\text{for } z_4 \\le z_2,\\; z_1 \\ge \\left(m-1\\right)z_2,\\;\\text{and}\\; \\left(N_s-m-1\\right)z_4 \\le z_3 \\le \\left(N_s-m-1\\right)z_2.\n\\end{eqnarray} \\normalsize\n\\end{proof}\nNote again that only finite integrations of joint PDFs are involved.\n\n\\section{Closed-form expressions for exponential RV case} \\label{SEC:VI}\nThe above generic results are quite general and apply to arbitrary RVs. We now focus on obtaining the joint PDFs for i.n.d. 
exponential RV special cases in a ready-to-use form, where the PDF and the CDF of each RV are given by ${p_{{i_l}}}\\left( x \\right) = \\frac{1}{{{{\\bar \\gamma }_{{i_l}}}}}\\exp \\left( { - \\frac{x}{{{{\\bar \\gamma }_{{i_l}}}}}} \\right)$ and $P_{{i_l}}\\left( x \\right) = 1-\\exp \\left( { - \\frac{x}{{{{\\bar \\gamma }_{{i_l}}}}}} \\right)$ for $x\\ge 0$, respectively, where ${\\bar \\gamma }_{i_l}$ is the average of the RV with index $i_l$. \nAs shown in Appendix~\\ref{AP:H}, (\\ref{eq:CDF_MGF_multiple}), (\\ref{eq:EDF_MGF_multiple}) and (\\ref{eq:IntervalMGF_multiple}) can be specialized to\n\\begin{enumerate}\n\\item[i)] For special case:\n\\begin{equation}\\small \\label{eq:common_function_Rayleigh_1_s}\n {c_{{i_l}}}\\left( {{z_a},\\lambda } \\right) = \\frac{1}{{1 - {{\\bar \\gamma }_{{i_l}}}\\lambda }}\\left[ {1 - \\exp \\left( {\\left( {\\lambda - \\frac{1}{{{{\\bar \\gamma }_{{i_l}}}}}} \\right){z_a}} \\right)} \\right], \n\\end{equation}\n\\begin{equation} \\small \\label{eq:common_function_Rayleigh_2_s}\n {e_{{i_l}}}\\left( {{z_a},\\lambda } \\right) = \\frac{1}{{1 - {{\\bar \\gamma }_{{i_l}}}\\lambda }}\\left[ {\\exp \\left( {\\left( {\\lambda - \\frac{1}{{{{\\bar \\gamma }_{{i_l}}}}}} \\right){z_a}} \\right)} \\right], \n\\end{equation}\n\\begin{equation} \\small \\label{eq:common_function_Rayleigh_3_s}\n {\\mu _{{i_l}}}\\left( {{z_a},{z_b},\\lambda } \\right) = \\frac{1}{{1 - {{\\bar \\gamma }_{{i_l}}}\\lambda }}\\left[ {\\exp \\left( {\\left( {\\lambda - \\frac{1}{{{{\\bar \\gamma }_{{i_l}}}}}} \\right){z_a}} \\right) - \\exp \\left( {\\left( {\\lambda - \\frac{1}{{{{\\bar \\gamma }_{{i_l}}}}}} \\right){z_b}} \\right)} \\right]. 
\n\\end{equation}\n\\item[ii)] For general case:\n\\small\n\\begin{eqnarray} \\label{eq:common_function_Rayleigh_1}\n\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!&& \\prod\\limits_{l = {n_1}}^{{n_2}} {{c_{{i_l}}}\\left( {{z_a},\\lambda } \\right)} = \\frac{1}{{\\prod\\limits_{l = {n_1}}^{{n_2}} {\\left( {1 - {{\\bar \\gamma }_{{i_l}}}\\lambda } \\right)} }}\\prod\\limits_{l = {n_1}}^{{n_2}} {\\left[ {1 - \\exp \\left( {\\left( {\\lambda - \\frac{1}{{{{\\bar \\gamma }_{{i_l}}}}}} \\right){z_a}} \\right)} \\right]} \\nonumber\\\\ \n\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!&&= \\!\\sum\\limits_{k = {n_1}}^{{n_2}} \\!{{C_{k,n_1,n_2}}\\left[\\! {\\frac{{1 + \\left[\\! {\\sum\\limits_{l = 1}^{{n_2} - {n_1} + 1} {\\exp \\left(\\! {l \\cdot {z_a} \\cdot \\lambda } \\!\\right)\\left\\{\\! {{{\\left( { - 1} \\right)}^l}\\sum\\limits_{{j_1} = {j_0} + {n_1}}^{{n_2} - l + 1} { \\cdots \\sum\\limits_{{j_l} = {j_{l - 1}} + 1}^{{n_2}} {\\exp \\left(\\! { - \\sum\\limits_{m = 1}^l {\\frac{{{z_a}}}{{{{\\bar \\gamma }_{{i_{{j_m}}}}}}}} } \\!\\right)} } } \\!\\right\\}} } \\!\\right]}}{{\\left( \\!{\\lambda - \\frac{1}{{{{\\bar \\gamma }_{{i_k}}}}}} \\!\\right)}}} \\!\\right]}, \n\\end{eqnarray}\n\\begin{eqnarray} \\label{eq:common_function_Rayleigh_2}\n\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!&&\\prod\\limits_{l = {n_1}}^{{n_2}} {{e_{{i_l}}}\\left( {{z_a},\\lambda } \\right)} = \\frac{1}{{\\prod\\limits_{l = {n_1}}^{{n_2}} {\\left( {1 - {{\\bar \\gamma }_{{i_l}}}\\lambda } \\right)} }}\\exp \\left( {\\left\\{ {\\sum\\limits_{l = {n_1}}^{{n_2}} {\\left( {\\lambda - \\frac{1}{{{{\\bar \\gamma }_{{i_l}}}}}} \\right)} } \\right\\}{z_a}} \\right) \\nonumber\\\\ \n\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!&&= \\sum\\limits_{k = {n_1}}^{{n_2}} {\\frac{C_{k,n_1,n_2}}{{\\left( {\\lambda - \\frac{1}{{{{\\bar \\gamma }_{{i_k}}}}}} \\right)}}\\exp \\left( { - \\sum\\limits_{l = {n_1}}^{{n_2}} {\\left( {\\frac{{{z_a}}}{{{{\\bar \\gamma }_{{i_l}}}}}} 
\\right)} } \\right)\\exp \\left( {\\left( {{n_2} - {n_1} + 1} \\right){z_a}\\lambda } \\right)},\n\\end{eqnarray}\n\\begin{eqnarray} \\label{eq:common_function_Rayleigh_3}\n\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!&& \\prod\\limits_{l = {n_1}}^{{n_2}} {{\\mu _{{i_l}}}\\left( {{z_a},{z_b},\\lambda } \\right)} \n= \\frac{1}{{\\prod\\limits_{l = {n_1}}^{{n_2}} {\\left( {1 - {{\\bar \\gamma }_{{i_l}}}\\lambda } \\right)} }}\\prod\\limits_{l = {n_1}}^{{n_2}} {\\left[ {\\exp \\left( {\\left( {\\lambda - \\frac{1}{{{{\\bar \\gamma }_{{i_l}}}}}} \\right){z_a}} \\right) - \\exp \\left( {\\left( {\\lambda - \\frac{1}{{{{\\bar \\gamma }_{{i_l}}}}}} \\right){z_b}} \\right)} \\right]} \\nonumber\\\\ \n\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!&&= \\sum\\limits_{k = {n_1}}^{{n_2}} {{C_{k,n_1,n_2}}\\left[ {\\frac{{\\exp \\left( {\\left( {{n_2} - {n_1} + 1} \\right) \\cdot {z_a} \\cdot \\lambda } \\right)\\exp \\left( { - \\sum\\limits_{l = {n_1}}^{{n_2}} {\\left( {\\frac{{{z_a}}}{{{{\\bar \\gamma }_{{i_l}}}}}} \\right)} } \\right)}}{{\\left( {\\lambda - \\frac{1}{{{{\\bar \\gamma }_{{i_k}}}}}} \\right)}}} \\right.} \\nonumber\n\\nonumber\\\\\n\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!&&\\quad\\left. 
{ \\times \\left\\{ {1 + \\sum\\limits_{l = 1}^{{n_2} - {n_1} + 1} {\\exp \\left( {l \\cdot \\left( {{z_b} - {z_a}} \\right) \\cdot \\lambda } \\right)\\left\\{ {{{\\left( { - 1} \\right)}^l}\\sum\\limits_{{j_1} = {j_0} + {n_1}}^{{n_2} - l + 1} { \\cdots \\sum\\limits_{{j_l} = {j_{l - 1}} + 1}^{{n_2}} {\\exp \\left( { - \\sum\\limits_{m = 1}^l {\\frac{{{z_b} - {z_a}}}{{{{\\bar \\gamma }_{{i_{{j_m}}}}}}}} } \\right)} } } \\right\\}} } \\right\\}} \\right], \n\\end{eqnarray} \\normalsize\nwhere \n\\begin{equation} \\small\n{C_{k,n_1,n_2}} = \\frac{1}{{\\prod\\limits_{l = {n_1}}^{{n_2}} {\\left( { - {{\\bar \\gamma }_{{i_l}}}} \\right)} }{F'\\left( {\\frac{1}{{{{\\bar \\gamma }_{{i_k}}}}}} \\right)}},\n\\end{equation}\n\\begin{equation} \\small\nF'\\left( x \\right) \\!= \\left[\\! {\\sum\\limits_{l = 1}^{{n_2} - {n_1}}\\! {\\left(\\! {{n_2} - {n_1} - l + 1} \\!\\right){x^{{n_2} - {n_1} - l}}{{\\left(\\! { - 1} \\!\\right)}^l}\\sum\\limits_{{j_1} = {j_0} + {n_1}}^{{n_2} - l + 1} { \\cdots \\sum\\limits_{{j_l} = {j_{l - 1}} + 1}^{{n_2}} \\!{\\prod\\limits_{m = 1}^l {\\frac{1}{{{{\\bar \\gamma }_{{i_{{j_m}}}}}}}} } } } } \\!\\right] \\!+ \\!\\left(\\! {{n_2} - {n_1} + 1} \\!\\right){x^{{n_2} - {n_1}}}.\n\\end{equation}\n\\end{enumerate}\n\nAfter substituting (\\ref{eq:common_function_Rayleigh_1}), (\\ref{eq:common_function_Rayleigh_2}) and (\\ref{eq:common_function_Rayleigh_3}) into the derived expressions for the joint PDFs of partial sums of ordered statistics presented in the previous sections, one can readily derive the following closed-form expressions for the PDFs by applying the classical inverse Laplace transform pair and the property given in \\cite[Appendix I]{kn:unified_approach}. 
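For the simplest entry in that list, the PDF of the total sum $\sum_{n=1}^{N} u_n$, the inverse transform collapses to the classical hypoexponential (partial-fraction) density of a sum of independent exponentials. The following sketch checks that partial-fraction form against a Monte Carlo estimate; the averages in `gammas` are hypothetical values chosen only for illustration, not quantities from this paper:

```python
import math
import random

# Hypothetical distinct averages (illustrative only, not values from the paper)
gammas = [1.0, 2.0, 3.5]

def hypoexp_pdf(z, gammas):
    """PDF of a sum of independent exponential RVs with distinct means
    `gammas`, via the standard partial-fraction (residue) expansion."""
    rates = [1.0 / g for g in gammas]
    total = 0.0
    for i, li in enumerate(rates):
        coef = 1.0
        for j, lj in enumerate(rates):
            if j != i:
                coef *= lj / (lj - li)  # residue factor for the pole at -li
        total += coef * li * math.exp(-li * z)
    return total

# Monte Carlo histogram estimate of the same density at z = 4.0
random.seed(1)
n, z, h = 200_000, 4.0, 0.2  # samples, evaluation point, bin width
hits = sum(
    1 for _ in range(n)
    if abs(sum(random.expovariate(1.0 / g) for g in gammas) - z) < h / 2
)
mc_density = hits / (n * h)
exact_density = hypoexp_pdf(z, gammas)
```

With these averages the partial-fraction form gives $p_Z(4.0)\approx 0.1245$, and the simulated histogram value agrees to within Monte Carlo error.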
While some of these results have been derived using the successive conditioning approach previously, we list them here for the sake of convenience and completeness in the next page.\n\n\\begin{landscape}\n\\begin{enumerate}\n\\item[1)] PDF of $\\sum\\limits_{n = 1}^N {u_n}$:\n\\begin{equation} \\footnotesize\\label{eq:closed_form_1}\n{p_Z}\\left( {{z_1}} \\right)= \\sum\\limits_{\\left\\{ {{i_1},{i_2}, \\ldots ,{i_N}} \\right\\} \\in {{\\mathop{\\rm P}\\nolimits} _N}\\left( {{I_N}} \\right)} {\\sum\\limits_{l = 1}^N {{C_{l,1,N}}L_{{S_1}}^{ - 1}\\left\\{ {\\frac{1}{{\\left( { - {S_1} - \\frac{1}{{{{\\bar \\gamma }_{{i_l}}}}}} \\right)}}} \\right\\}} },\n\\end{equation}\nwhere\n\\begin{equation}\\footnotesize\nL_{{S_1}}^{ - 1}\\left\\{ {\\frac{1}{{\\left( { - {S_1} - \\frac{1}{{{{\\bar \\gamma }_{{i_l}}}}}} \\right)}}} \\right\\} = - \\exp \\left( { - \\frac{{{z_1}}}{{{{\\bar \\gamma }_{{i_l}}}}}} \\right).\n\\end{equation}\n\n\\item[2)] Joint PDF of $u_m$ and $\\sum\\limits_{\\scriptstyle n = 1 \\hfill \\atop \\scriptstyle n \\ne m \\hfill}^N {u_n} $:\n\\footnotesize \\begin{eqnarray} \\label{eq:non_closed_form_2}\n\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!&&{p_Z}\\left( {{z_1},{z_2}} \\right) = L_{{S_1},{S_2}}^{ - 1}\\left\\{ {{\\mu _Z}\\left( { - {S_1}, - {S_2}} \\right)} \\right\\} \\nonumber \\\\\n\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!&&= \\sum\\limits_{{i_m} = 1}^N {\\int\\limits_0^\\infty {d{u_m}\\frac{1}{{{{\\bar \\gamma }_{{i_m}}}}}\\exp \\left( { - \\frac{{{u_m}}}{{{{\\bar \\gamma }_{{i_m}}}}}} \\right)L_{{S_1}}^{ - 1}\\left\\{ {\\exp \\left( { - {S_1}{u_m}} \\right)} \\right\\}} } \\nonumber 
\\\\\n\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!&&\\quad \\times \\!\\! \\sum\\limits_{\\left\\{ {{i_1}, \\ldots ,{i_{m - 1}}} \\right\\} \\in {{\\mathop{\\rm P}\\nolimits} _{m - 1}}\\left( {{I_N} - \\left\\{ {{i_m}} \\right\\}} \\right)} \\!\\! {\\sum\\limits_{\\scriptstyle k = 1 \\atop \n \\scriptstyle \\left\\{ {{i_1}, \\ldots ,{i_{m - 1}}} \\right\\}}^{m - 1} \\!\\! {{C_{k,1,m-1}}\\exp \\left( \\!{ - \\sum\\limits_{l = 1}^{m - 1} {\\left(\\! {\\frac{{{u_m}}}{{{{\\bar \\gamma }_{{i_l}}}}}} \\!\\right)} } \\!\\right)\\! \\sum\\limits_{\\left\\{ {{i_{m + 1}}, \\ldots ,{i_N}} \\right\\} \\in {{\\mathop{\\rm P}\\nolimits} _{N - m}}\\left( \\! {{I_N} - \\left\\{ \\!{{i_m}} \\!\\right\\} - \\left\\{\\! {{i_1}, \\ldots ,{i_{m - 1}}} \\!\\right\\}} \\!\\right)} {\\sum\\limits_{\\scriptstyle q = m + 1 \\atop \n \\scriptstyle \\left\\{ {{i_{m + 1}}, \\ldots ,{i_N}} \\right\\}}^N \\!\\! {{C_{q,m+1,N}}L_{{S_2}}^{ - 1}\\left\\{ \\! {\\frac{{\\exp \\left( \\!{ - \\left( \\!{m - 1} \\! \\right){u_m}{S_2}}\\! 
\\right)}}{{\\left( \\!{ - {S_2} - \\frac{1}{{{{\\bar \\gamma }_{{i_k}}}}}} \\!\\right)\\left( \\!{ - {S_2} - \\frac{1}{{{{\\bar \\gamma }_{{i_q}}}}}} \\!\\right)}}} \\!\\right\\}} } } } \\nonumber \\\\\n\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!&&\\quad + \\sum\\limits_{{i_m} = 1}^N {\\int\\limits_0^\\infty {d{u_m}\\frac{1}{{{{\\bar \\gamma }_{{i_m}}}}}\\exp \\left( { - \\frac{{{u_m}}}{{{{\\bar \\gamma }_{{i_m}}}}}} \\right)L_{{S_1}}^{ - 1}\\left\\{ {\\exp \\left( { - {S_1}{u_m}} \\right)} \\right\\}} } \\sum\\limits_{\\left\\{ {{i_1}, \\ldots ,{i_{m - 1}}} \\right\\} \\in {{\\mathop{\\rm P}\\nolimits} _{m - 1}}\\left( {{I_N} - \\left\\{ {{i_m}} \\right\\}} \\right)} {\\sum\\limits_{\\scriptstyle k = 1 \\atop \n \\scriptstyle \\left\\{ {{i_1}, \\ldots ,{i_{m - 1}}} \\right\\}}^{m - 1} {{C_{k,1,m-1}}\\exp \\left( { - \\sum\\limits_{l = 1}^{m - 1} {\\left( {\\frac{{{u_m}}}{{{{\\bar \\gamma }_{{i_l}}}}}} \\right)} } \\right)} } \\nonumber \\\\\n\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!&&\\quad \\times \\!\\! \\sum\\limits_{\\left\\{ {{i_{m + 1}}, \\ldots ,{i_N}} \\right\\} \\in {{\\mathop{\\rm P}\\nolimits} _{N - m}}\\left( {{I_N} - \\left\\{ {{i_m}} \\right\\} - \\left\\{ {{i_1}, \\ldots ,{i_{m - 1}}} \\right\\}} \\right)} \\!\\! {\\sum\\limits_{\\scriptstyle q = m + 1 \\atop \n \\scriptstyle \\left\\{ {{i_{m + 1}}, \\ldots ,{i_N}} \\right\\}}^N {{C_{q,m+1,N}}\\left[ \\! {\\sum\\limits_{h = 1}^{N - m} \\! {\\left\\{ \\! {{{\\left(\\! { - 1} \\!\\right)}^h}\\! \\sum\\limits_{{j_1} = {j_0} + m + 1}^{N - h + 1} {\\! \\cdots\\! \\sum\\limits_{{j_h} = {j_{h - 1}} + 1}^N {\\exp \\left( \\!{ - \\sum\\limits_{m = 1}^h {\\frac{{{u_m}}}{{{{\\bar \\gamma }_{{i_{{j_m}}}}}}}} } \\!\\right)} } } \\!\\right\\}L_{{S_2}}^{ - 1}\\left\\{\\! 
{\\frac{{\\exp \\left( \\!{ - \\left( \\!{h + m - 1} \\!\\right){u_m}{S_2}} \\!\\right)}}{{\\left(\\! { - {S_2} - \\frac{1}{{{{\\bar \\gamma }_{{i_k}}}}}}\\! \\right)\\left(\\! { - {S_2} - \\frac{1}{{{{\\bar \\gamma }_{{i_q}}}}}} \\!\\right)}}} \\!\\right\\}} } \\!\\right]} },\n\\end{eqnarray} \\normalsize\nwhere\n\\begin{equation} \\footnotesize\nL_{{S_1}}^{ - 1}\\left\\{ {\\exp \\left( { - {S_1}{u_m}} \\right)} \\right\\} = \\delta \\left( {{z_1} - {u_m}} \\right),\n\\end{equation}\n\\begin{equation} \\footnotesize \n\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!L_{{S_2}}^{ - 1}\\left\\{ {\\frac{{\\exp \\left( { - \\left( {m - 1} \\right){u_m}{S_2}} \\right)}}{{\\left( { - {S_2} - \\frac{1}{{{{\\bar \\gamma }_{{i_k}}}}}} \\right)\\left( { - {S_2} - \\frac{1}{{{{\\bar \\gamma }_{{i_q}}}}}} \\right)}}} \\right\\} = \\frac{{\\exp \\left( { - \\left( {{z_2} - \\left( {m - 1} \\right){u_m}} \\right)\\left( {\\frac{1}{{{{\\bar \\gamma }_{{i_k}}}}} + \\frac{1}{{{{\\bar \\gamma }_{{i_q}}}}}} \\right)} \\right)\\left\\{ {\\exp \\left( {\\frac{{{z_2} - \\left( {m - 1} \\right){u_m}}}{{{{\\bar \\gamma }_{{i_q}}}}}} \\right) - \\exp \\left( {\\frac{{{z_2} - \\left( {m - 1} \\right){u_m}}}{{{{\\bar \\gamma }_{{i_k}}}}}} \\right)} \\right\\}U\\left( {{z_2} - \\left( {m - 1} \\right){u_m}} \\right)}}{{\\left( {\\frac{1}{{{{\\bar \\gamma }_{{i_q}}}}} - \\frac{1}{{{{\\bar \\gamma }_{{i_k}}}}}} \\right)}},\n\\end{equation}\n\\begin{equation} \\footnotesize \n\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!L_{{S_2}}^{ - 1}\\left\\{ {\\frac{{\\exp \\left( { - \\left( {h + m - 1} \\right){u_m}{S_2}} \\right)}}{{\\left( { - {S_2} - \\frac{1}{{{{\\bar \\gamma }_{{i_k}}}}}} \\right)\\left( { - {S_2} - \\frac{1}{{{{\\bar \\gamma }_{{i_q}}}}}} \\right)}}} \\right\\} = \\frac{{\\exp \\left( { - \\left( {{z_2} - \\left( {h + m - 1} \\right){u_m}} \\right)\\left( {\\frac{1}{{{{\\bar \\gamma }_{{i_k}}}}} + \\frac{1}{{{{\\bar \\gamma }_{{i_q}}}}}} \\right)} \\right)\\left\\{ {\\exp \\left( {\\frac{{{z_2} - \\left( 
{h + m - 1} \\right){u_m}}}{{{{\\bar \\gamma }_{{i_q}}}}}} \\right) - \\exp \\left( {\\frac{{{z_2} - \\left( {h + m - 1} \\right){u_m}}}{{{{\\bar \\gamma }_{{i_k}}}}}} \\right)} \\right\\}U\\left( {{z_2} - \\left( {h + m - 1} \\right){u_m}} \\right)}}{{\\left( {\\frac{1}{{{{\\bar \\gamma }_{{i_q}}}}} - \\frac{1}{{{{\\bar \\gamma }_{{i_k}}}}}} \\right)}}.\n\\end{equation}\n\n\\item[3)] Joint PDF of $\\sum\\limits_{n = 1}^m {u_n}$ and $\\sum\\limits_{n = m + 1}^N {u_n}$:\n\\footnotesize \\begin{eqnarray} \\label{eq:non_closed_form_3}\n\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!&& {p_Z}\\left( {{z_1},{z_2}} \\right) = L_{{S_1},{S_2}}^{ - 1}\\left\\{ {{\\mu _Z}\\left( { - {S_1}, - {S_2}} \\right)} \\right\\} \\nonumber \\\\\n\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!&&= \\sum\\limits_{{i_m} = 1}^N {\\int\\limits_0^\\infty {d{u_m}\\frac{1}{{{{\\bar \\gamma }_{{i_m}}}}}\\exp \\left( { - \\frac{{{u_m}}}{{{{\\bar \\gamma }_{{i_m}}}}}} \\right)} } \\nonumber \\\\\n\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!&&\\quad \\times \\sum\\limits_{\\left\\{ {{i_1}, \\ldots ,{i_{m - 1}}} \\right\\} \\in {{\\mathop{\\rm P}\\nolimits} _{m - 1}}\\left( {{I_N} - \\left\\{ {{i_m}} \\right\\}} \\right)} {\\sum\\limits_{\\scriptstyle k = 1 \\atop \n \\scriptstyle \\left\\{ {{i_1}, \\ldots ,{i_{m - 1}}} \\right\\}}^{m - 1} {{C_{k,1,m-1}}\\exp \\left( { - \\sum\\limits_{l = 1}^{m - 1} {\\left( {\\frac{{{u_m}}}{{{{\\bar \\gamma }_{{i_l}}}}}} \\right)} } \\right)L_{{S_1}}^{ - 1}\\left\\{ {\\frac{{\\exp \\left( { - m{u_m}{S_1}} \\right)}}{{\\left( { - {S_1} - \\frac{1}{{{{\\bar \\gamma }_{{i_k}}}}}} \\right)}}} \\right\\}\\sum\\limits_{\\left\\{ {{i_{m + 1}}, \\ldots ,{i_N}} \\right\\} \\in {{\\mathop{\\rm P}\\nolimits} _{N - 
m}}\\left( {{I_N} - \\left\\{ {{i_m}} \\right\\} - \\left\\{ {{i_1}, \\ldots ,{i_{m - 1}}} \\right\\}} \\right)} } } \\nonumber \\\\\n\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!&&\\quad \\times \\sum\\limits_{\\scriptstyle q = m + 1 \\atop \n \\scriptstyle \\left\\{ {{i_{m + 1}}, \\ldots ,{i_N}} \\right\\}}^N {C_{q,m+1,N}}L_{{S_2}}^{ - 1}\\left\\{ {\\frac{1}{{\\left( { - {S_2} - \\frac{1}{{{{\\bar \\gamma }_{{i_q}}}}}} \\right)}}} \\right\\}\n\\nonumber\\\\\n\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!&&\\quad + \\sum\\limits_{{i_m} = 1}^N {\\int\\limits_0^\\infty {d{u_m}\\frac{1}{{{{\\bar \\gamma }_{{i_m}}}}}\\exp \\left( { - \\frac{{{u_m}}}{{{{\\bar \\gamma }_{{i_m}}}}}} \\right)} } \\sum\\limits_{\\left\\{ {{i_1}, \\ldots ,{i_{m - 1}}} \\right\\} \\in {{\\mathop{\\rm P}\\nolimits} _{m - 1}}\\left( {{I_N} - \\left\\{ {{i_m}} \\right\\}} \\right)} {\\sum\\limits_{\\scriptstyle k = 1 \\atop \n \\scriptstyle \\left\\{ {{i_1}, \\ldots ,{i_{m - 1}}} \\right\\}}^{m - 1} {{C_{k,1,m-1}}\\exp \\left( { - \\sum\\limits_{l = 1}^{m - 1} {\\left( {\\frac{{{u_m}}}{{{{\\bar \\gamma }_{{i_l}}}}}} \\right)} } \\right)L_{{S_1}}^{ - 1}\\left\\{ {\\frac{{\\exp \\left( { - m{u_m}{S_1}} \\right)}}{{\\left( { - {S_1} - \\frac{1}{{{{\\bar \\gamma }_{{i_k}}}}}} \\right)}}} \\right\\}} } \\nonumber \\\\\n\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!&&\\quad \\times \\sum\\limits_{\\left\\{ {{i_{m + 1}}, \\ldots ,{i_N}} \\right\\} \\in {{\\mathop{\\rm P}\\nolimits} _{N - m}}\\left( {{I_N} - \\left\\{ {{i_m}} \\right\\} - \\left\\{ {{i_1}, \\ldots ,{i_{m - 1}}} \\right\\}} \\right)} {\\sum\\limits_{\\scriptstyle q = m + 1 \\atop \n \\scriptstyle \\left\\{ {{i_{m + 1}}, \\ldots ,{i_N}} \\right\\}}^N {{C_{q,m+1,N}}\\left[ 
{\\sum\\limits_{h = 1}^{N - m} {\\left\\{ {{{\\left( { - 1} \\right)}^h}\\sum\\limits_{{j_1} = {j_0} + m + 1}^{N - h + 1} { \\cdots \\sum\\limits_{{j_h} = {j_{h - 1}} + 1}^N {\\exp \\left( { - \\sum\\limits_{m = 1}^h {\\frac{{{u_m}}}{{{{\\bar \\gamma }_{{i_{{j_m}}}}}}}} } \\right)} } } \\right\\}L_{{S_2}}^{ - 1}\\left\\{ {\\frac{{\\exp \\left( { - h{u_m}{S_2}} \\right)}}{{\\left( { - {S_2} - \\frac{1}{{{{\\bar \\gamma }_{{i_q}}}}}} \\right)}}} \\right\\}} } \\right]} },\n\\end{eqnarray} \\normalsize\nwhere\n\\begin{equation} \\footnotesize\nL_{{S_2}}^{ - 1}\\left\\{ {\\frac{1}{{\\left( { - {S_2} - \\frac{1}{{{{\\bar \\gamma }_{{i_q}}}}}} \\right)}}} \\right\\} = - \\exp \\left( { - \\frac{{{z_2}}}{{{{\\bar \\gamma }_{{i_q}}}}}} \\right),\n\\end{equation}\n\\begin{equation} \\footnotesize\nL_{{S_1}}^{ - 1}\\left\\{ {\\frac{{\\exp \\left( { - m{u_m}{S_1}} \\right)}}{{ - {S_1} - \\frac{1}{{{{\\bar \\gamma }_{{i_k}}}}}}}} \\right\\} = - \\exp \\left( -{\\frac{{{z_1} - m{u_m}}}{{{{\\bar \\gamma }_{{i_k}}}}}} \\right)U\\left( {{z_1} - m{u_m}} \\right),\n\\end{equation}\n\\begin{equation} \\footnotesize\nL_{{S_2}}^{ - 1}\\left\\{ {\\frac{{\\exp \\left( { - h{u_m}{S_2}} \\right)}}{{ - {S_2} - \\frac{1}{{{{\\bar \\gamma }_{{i_q}}}}}}}} \\right\\} = - \\exp \\left( -{\\frac{{{z_2} - h{u_m}}}{{{{\\bar \\gamma }_{{i_q}}}}}} \\right)U\\left( {{z_2} - h{u_m}} \\right).\n\\end{equation}\n\n\\item[4)] PDF of $\\sum\\limits_{n = 1}^{N_s } {u_n}$:\n\\footnotesize\\begin{eqnarray} \\label{eq:non_closed_form_4}\n\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!&&{p_Z}\\left( {{z_1},{z_2}} \\right) = L_{{S_1},{S_2}}^{ - 1}\\left\\{ {{\\mu _Z}\\left( { - {S_1}, - {S_2}} \\right)} \\right\\} \\nonumber \\\\\n\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!&&= \\sum\\limits_{{i_{{N_s}}} = 1}^N {\\int\\limits_0^\\infty 
{d{u_{{N_s}}}\\frac{1}{{{{\\bar \\gamma }_{{i_{{N_s}}}}}}}\\exp \\left( { - \\frac{{{u_{{N_s}}}}}{{{{\\bar \\gamma }_{{i_{{N_s}}}}}}}} \\right)L_{{S_2}}^{ - 1}\\left\\{ {\\exp \\left( { - {S_2}{u_{{N_s}}}} \\right)} \\right\\}\\sum\\limits_{\\substack{\n {i_{{N_s} + 1}}, \\ldots ,{i_N} \\\\ \n {i_{{N_s} + 1}} \\ne \\cdots \\ne {i_N} \\\\ \n {i_{{N_s} + 1}} \\ne {i_{{N_s}}} \\\\ \n \\vdots \\\\ \n {i_N} \\ne {i_{{N_s}}} \\\\ \n }}^{1,2, \\ldots ,N} {\\prod\\limits_{\\scriptstyle k = {N_s} + 1 \\atop \n \\scriptstyle \\left\\{ {{i_{{N_s} + 1}}, \\ldots ,{i_N}} \\right\\}}^N {\\left\\{ {1 - \\exp \\left( { - \\frac{{{u_{{N_s}}}}}{{{{\\bar \\gamma }_{{i_k}}}}}} \\right)} \\right\\}} } } } \\nonumber \\\\\n\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!&&\\quad \\times \\sum\\limits_{\\left\\{ {{i_1}, \\ldots ,{i_{{N_s} - 1}}} \\right\\} \\in {{\\mathop{\\rm P}\\nolimits} _{{N_s} - 1}}\\left( {{I_N} - \\left\\{ {{i_{{N_s}}}} \\right\\} - \\left\\{ {{i_{{N_s} + 1}}, \\ldots ,{i_N}} \\right\\}} \\right)} {\\prod\\limits_{\\scriptstyle q = 1 \\atop \n \\scriptstyle \\left\\{ {{i_1}, \\ldots ,{i_{{N_s} - 1}}} \\right\\}}^{{N_s} - 1} {{C_{q,1,{N_s} - 1}}\\exp \\left( { - \\sum\\limits_{l = 1}^{{N_s} - 1} {\\left( {\\frac{{{u_{{N_s}}}}}{{{{\\bar \\gamma }_{{i_l}}}}}} \\right)} } \\right)L_{{S_1}}^{ - 1}\\left\\{ {\\frac{{\\exp \\left( { - \\left( {{N_s} - 1} \\right){u_{{N_s}}}{S_1}} \\right)}}{{\\left( { - {S_1} - \\frac{1}{{{{\\bar \\gamma }_{{i_q}}}}}} \\right)}}} \\right\\}} },\n\\end{eqnarray} \\normalsize\nwhere\n\\begin{equation} \\footnotesize\nL_{{S_2}}^{ - 1}\\left\\{ {\\exp \\left( { - {S_2}{u_{{N_s}}}} \\right)} \\right\\} = \\delta \\left( {{z_2} - {u_{{N_s}}}} \\right),\n\\end{equation}\n\\begin{equation} \\footnotesize \nL_{{S_1}}^{ - 1}\\left\\{ {\\frac{{\\exp \\left( { - \\left( {{N_s} - 1} \\right){u_{{N_s}}}{S_1}} \\right)}}{{\\left( { - {S_1} - \\frac{1}{{{{\\bar \\gamma 
}_{{i_q}}}}}} \\right)}}} \\right\\} = - \\exp \\left( { - \\frac{{{z_1} - \\left( {{N_s} - 1} \\right){u_{{N_s}}}}}{{{{\\bar \\gamma }_{{i_q}}}}}} \\right)U\\left( {{z_1} - \\left( {{N_s} - 1} \\right){u_{{N_s}}}} \\right).\n\\end{equation}\n\n\\item[5)] Joint PDF of $u_m$ and $\\sum\\limits_{\\scriptstyle n = 1 \\hfill \\atop \\scriptstyle n \\ne m \\hfill}^{N_s } {u_{n} }$ for $1 T} \\right],\n\\end{equation}\nwhere $0 < T < 1$ and $m < N$.\nIf we assume $Z=[Z_1, Z_2]$, $Z_1=\\sum\\limits_{n = 1}^{m}{u_{n}}$ and $Z_2=\\sum\\limits_{n = m+1}^N{u_{n}}$, then (\\ref{eq:Prob_capture}) can be calculated in terms of the 2-dimensional joint PDF of $Z_1$ and $Z_2$ easily as\n\\begin{equation} \\small \\label{eq:Capture_probability_closed_form_1}\n\\text{Prob}_{GSC-capture}=\\text{Pr} \\left[ {\\frac{{{Z_1}}}{{{Z_1} + {Z_2}}} > T} \\right] = \\int_0^\\infty {\\int_0^{\\left( {\\frac{{1 - T}}{T}} \\right){z_1}} {{p_Z}\\left( {{z_1},{z_2}} \\right)d{z_2}d{z_1}} }. \n\\end{equation}\nThe joint PDF of $\\sum\\limits_{n = 1}^{m}{u_{n}}$ and $\\sum\\limits_{n = m+1}^N{u_{n}}$, ${p_Z}\\left( {{z_1},{z_2}} \\right)$ can be derived with the help of our extended approach in this paper. More specifically, inserting (\\ref{eq:non_closed_form_3}) into (\\ref{eq:Capture_probability_closed_form_1}), the closed-form expression for i.n.d. 
Rayleigh fading conditions is shown at the top of the next page (refer to Appendix-\\ref{AP:capture_prob_GSC} for details).\n\\begin{figure*} [!h]\n\\setcounter{equation}{62}\n\\tiny\n\\begin{eqnarray} \\label{eq:Capture_probability_closed_form_2}\n\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!&&\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!{\\text{Prob}_{GSC-capture}} \\nonumber\n\\\\\n\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!&=&\\!\\!\\!\\!\\!\\!\\!\\sum\\limits_{{i_m} = 1}^N {\\frac{1}{{{{\\bar \\gamma }_{{i_m}}}}}\\sum\\limits_{\\left\\{ {{i_1}, \\ldots ,{i_{m - 1}}} \\right\\} \\in {{\\mathop{\\rm P}\\nolimits} _{m - 1}}\\left( {{I_N} - \\left\\{ {{i_m}} \\right\\}} \\right)} {\\sum\\limits_{\\scriptstyle k = 1 \\atop \n \\scriptstyle \\left\\{ {{i_1}, \\ldots ,{i_{m - 1}}} \\right\\}}^{m - 1} {{C_{k,1,m-1}}\\sum\\limits_{\\left\\{ {{i_{m + 1}}, \\ldots ,{i_N}} \\right\\} \\in {{\\mathop{\\rm P}\\nolimits} _{N - m}}\\left( {{I_N} - \\left\\{ {{i_m}} \\right\\} - \\left\\{ {{i_1}, \\ldots ,{i_{m - 1}}} \\right\\}} \\right)} {\\sum\\limits_{\\scriptstyle q = m + 1 \\atop \n \\scriptstyle \\left\\{ {{i_{m + 1}}, \\ldots ,{i_N}} \\right\\}}^N {{C_{q,m+1,N}}} } } } } \\nonumber\n\\\\\n\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!&&\\!\\!\\!\\!\\!\\!\\!\\quad \\times\\left[ {\\frac{1}{{\\left( {\\sum\\limits_{l = 1}^m {\\left( {\\frac{1}{{{{\\bar \\gamma }_{{i_l}}}}}} \\right) - \\frac{m}{{{{\\bar \\gamma }_{{i_k}}}}}} } \\right)}}} \\right]\\int_0^\\infty {\\int_0^{\\left( {\\frac{{1 - T}}{T}} \\right){z_1}} {\\exp \\left( { - \\frac{{{z_2}}}{{{{\\bar \\gamma }_{{i_q}}}}}} \\right)\\exp \\left( { - \\frac{{{z_1}}}{{{{\\bar \\gamma }_{{i_k}}}}}} \\right)d{z_2}d{z_1}} } \\nonumber\n\\\\\n\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!&&\\!\\!\\!\\!\\!\\!\\!- \\sum\\limits_{{i_m} = 1}^N 
{\\frac{1}{{{{\\bar \\gamma }_{{i_m}}}}}\\sum\\limits_{\\left\\{ {{i_1}, \\ldots ,{i_{m - 1}}} \\right\\} \\in {{\\mathop{\\rm P}\\nolimits} _{m - 1}}\\left( {{I_N} - \\left\\{ {{i_m}} \\right\\}} \\right)} {\\sum\\limits_{\\scriptstyle k = 1 \\atop \n \\scriptstyle \\left\\{ {{i_1}, \\ldots ,{i_{m - 1}}} \\right\\}}^{m - 1} {{C_{k,1,m-1}}\\sum\\limits_{\\left\\{ {{i_{m + 1}}, \\ldots ,{i_N}} \\right\\} \\in {{\\mathop{\\rm P}\\nolimits} _{N - m}}\\left( {{I_N} - \\left\\{ {{i_m}} \\right\\} - \\left\\{ {{i_1}, \\ldots ,{i_{m - 1}}} \\right\\}} \\right)} {\\sum\\limits_{\\scriptstyle q = m + 1 \\atop \n \\scriptstyle \\left\\{ {{i_{m + 1}}, \\ldots ,{i_N}} \\right\\}}^N {{C_{q,m+1,N}}} } } } } \\nonumber\n\\\\\n\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!&&\\!\\!\\!\\!\\!\\!\\!\\quad \\times \\left[ {\\frac{1}{{\\left( {\\sum\\limits_{l = 1}^m {\\left( {\\frac{1}{{{{\\bar \\gamma }_{{i_l}}}}}} \\right) - \\frac{m}{{{{\\bar \\gamma }_{{i_k}}}}}} } \\right)}}} \\right]\\int_0^\\infty {\\int_0^{\\left( {\\frac{{1 - T}}{T}} \\right){z_1}} {\\exp \\left( { - \\frac{{{z_2}}}{{{{\\bar \\gamma }_{{i_q}}}}}} \\right)\\exp \\left( { - \\left( {\\sum\\limits_{l = 1}^m {\\left( {\\frac{1}{{{{\\bar \\gamma }_{{i_l}}}}}} \\right)} } \\right)\\frac{{{z_1}}}{m}} \\right)d{z_2}d{z_1}} }\\nonumber\n\\\\\n\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!&&\\!\\!\\!\\!\\!\\!\\!+ \\sum\\limits_{{i_m} = 1}^N {\\frac{1}{{{{\\bar \\gamma }_{{i_m}}}}}\\sum\\limits_{\\left\\{ {{i_1}, \\ldots ,{i_{m - 1}}} \\right\\} \\in {{\\mathop{\\rm P}\\nolimits} _{m - 1}}\\left( {{I_N} - \\left\\{ {{i_m}} \\right\\}} \\right)} {\\sum\\limits_{\\scriptstyle k = 1 \\atop \n \\scriptstyle \\left\\{ {{i_1}, \\ldots ,{i_{m - 1}}} \\right\\}}^{m - 1} {{C_{k,1,m-1}}\\sum\\limits_{\\left\\{ {{i_{m + 1}}, \\ldots ,{i_N}} \\right\\} \\in {{\\mathop{\\rm P}\\nolimits} _{N - m}}\\left( {{I_N} - \\left\\{ {{i_m}} \\right\\} - 
\\left\\{ {{i_1}, \\ldots ,{i_{m - 1}}} \\right\\}} \\right)} {\\sum\\limits_{\\scriptstyle q = m + 1 \\atop \n \\scriptstyle \\left\\{ {{i_{m + 1}}, \\ldots ,{i_N}} \\right\\}}^N {{C_{q,m+1,N}}} } } } } \\nonumber\n\\\\\n\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!&&\\!\\!\\!\\!\\!\\!\\!\\quad \\times \\left[ \\!{\\sum\\limits_{h = 1}^{N - m} \\!{\\left\\{\\! {{{\\left(\\! { - 1} \\!\\right)}^h}\\!\\sum\\limits_{{j_1} = {j_0} + m + 1}^{N - h + 1} \\!{ \\cdots \\!\\sum\\limits_{{j_h} = {j_{h - 1}} + 1}^N \\!{\\left(\\! {\\frac{1}{{\\left( \\!{\\sum\\limits_{m = 1}^h \\!{\\left( \\!{\\frac{1}{{{{\\bar \\gamma }_{{i_{{j_m}}}}}}}} \\!\\right) \\!+ \\!\\sum\\limits_{l = 1}^m \\!{\\left(\\! {\\frac{1}{{{{\\bar \\gamma }_{{i_l}}}}}}\\! \\right) \\!- \\!\\frac{m}{{{{\\bar \\gamma }_{{i_k}}}}} - \\frac{h}{{{{\\bar \\gamma }_{{i_q}}}}}} } } \\!\\right)}}} \\!\\right)\\!\\int_0^\\infty \\! {\\int_0^{\\left(\\! {\\frac{{1 - T}}{T}} \\!\\right){z_1}}\\! {\\exp \\!\\left(\\! { - \\frac{{{z_1}}}{{{{\\bar \\gamma }_{{i_k}}}}}} \\!\\right)\\!\\exp \\!\\left(\\! { - \\frac{{{z_2}}}{{{{\\bar \\gamma }_{{i_q}}}}}}\\!\\right)\\!U\\left(\\! {\\frac{{{z_1}}}{m} - \\frac{{{z_2}}}{h}} \\!\\right)\\!d{z_2}d{z_1}} } } } } \\!\\right\\}} } \\!\\right] \\nonumber\n\\\\\n\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!&&\\!\\!\\!\\!\\!\\!\\! 
- \\sum\\limits_{{i_m} = 1}^N {\\frac{1}{{{{\\bar \\gamma }_{{i_m}}}}}\\sum\\limits_{\\left\\{ {{i_1}, \\ldots ,{i_{m - 1}}} \\right\\} \\in {{\\mathop{\\rm P}\\nolimits} _{m - 1}}\\left( {{I_N} - \\left\\{ {{i_m}} \\right\\}} \\right)} {\\sum\\limits_{\\scriptstyle k = 1 \\atop \n \\scriptstyle \\left\\{ {{i_1}, \\ldots ,{i_{m - 1}}} \\right\\}}^{m - 1} {{C_{k,1,m-1}}\\sum\\limits_{\\left\\{ {{i_{m + 1}}, \\ldots ,{i_N}} \\right\\} \\in {{\\mathop{\\rm P}\\nolimits} _{N - m}}\\left( {{I_N} - \\left\\{ {{i_m}} \\right\\} - \\left\\{ {{i_1}, \\ldots ,{i_{m - 1}}} \\right\\}} \\right)} {\\sum\\limits_{\\scriptstyle q = m + 1 \\atop \n \\scriptstyle \\left\\{ {{i_{m + 1}}, \\ldots ,{i_N}} \\right\\}}^N {{C_{q,m+1,N}}} } } } } \\nonumber\n\\\\\n\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!&&\\!\\!\\!\\!\\!\\!\\!\\quad \\times \\Vast[ {\\sum\\limits_{h = 1}^{N - m} {{{\\left( { - 1} \\right)}^h}\\sum\\limits_{{j_1} = {j_0} + m + 1}^{N - h + 1} { \\cdots \\sum\\limits_{{j_h} = {j_{h - 1}} + 1}^N {\\left( {\\frac{1}{{\\left( {\\sum\\limits_{m = 1}^h {\\left( {\\frac{1}{{{{\\bar \\gamma }_{{i_{{j_m}}}}}}}} \\right) + \\sum\\limits_{l = 1}^m {\\left( {\\frac{1}{{{{\\bar \\gamma }_{{i_l}}}}}} \\right) - \\frac{m}{{{{\\bar \\gamma }_{{i_k}}}}} - \\frac{h}{{{{\\bar \\gamma }_{{i_q}}}}}} } } \\right)}}} \\right)} } } }\\nonumber\n\\\\\n\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!&&\\!\\!\\!\\!\\!\\!\\!\\quad \\quad \\quad \\times {\\int_0^\\infty {\\int_0^{\\left( {\\frac{{1 - T}}{T}} \\right){z_1}} {\\exp \\left( { - \\frac{{{z_1}}}{{{{\\bar \\gamma }_{{i_k}}}}}} \\right)\\exp \\left( { - \\left( {\\sum\\limits_{m = 1}^h {\\left( {\\frac{1}{{{{\\bar \\gamma }_{{i_{{j_m}}}}}}}} \\right) + \\sum\\limits_{l = 1}^m {\\left( {\\frac{1}{{{{\\bar \\gamma }_{{i_l}}}}}} \\right) - \\frac{m}{{{{\\bar \\gamma }_{{i_k}}}}}} } } \\right)\\frac{{{z_2}}}{h}} \\right)U\\left( {\\frac{{{z_1}}}{m} - 
\\frac{{{z_2}}}{h}} \\right)d{z_2}d{z_1}} } }\\Vast] \\nonumber \n\\\\\n\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!&&\\!\\!\\!\\!\\!\\!\\! + \\sum\\limits_{{i_m} = 1}^N {\\frac{1}{{{{\\bar \\gamma }_{{i_m}}}}}\\sum\\limits_{\\left\\{ {{i_1}, \\ldots ,{i_{m - 1}}} \\right\\} \\in {{\\mathop{\\rm P}\\nolimits} _{m - 1}}\\left( {{I_N} - \\left\\{ {{i_m}} \\right\\}} \\right)} {\\sum\\limits_{\\scriptstyle k = 1 \\atop \n \\scriptstyle \\left\\{ {{i_1}, \\ldots ,{i_{m - 1}}} \\right\\}}^{m - 1} {{C_{k,1,m-1}}\\sum\\limits_{\\left\\{ {{i_{m + 1}}, \\ldots ,{i_N}} \\right\\} \\in {{\\mathop{\\rm P}\\nolimits} _{N - m}}\\left( {{I_N} - \\left\\{ {{i_m}} \\right\\} - \\left\\{ {{i_1}, \\ldots ,{i_{m - 1}}} \\right\\}} \\right)} {\\sum\\limits_{\\scriptstyle q = m + 1 \\atop \n \\scriptstyle \\left\\{ {{i_{m + 1}}, \\ldots ,{i_N}} \\right\\}}^N {{C_{q,m+1,N}}} } } } } \\nonumber\n\\\\\n\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!&&\\!\\!\\!\\!\\!\\!\\!\\quad \\times \\Vast[ {\\sum\\limits_{h = 1}^{N - m} {{{\\left( { - 1} \\right)}^h}\\sum\\limits_{{j_1} = {j_0} + m + 1}^{N - h + 1} { \\cdots \\sum\\limits_{{j_h} = {j_{h - 1}} + 1}^N {\\left( {\\frac{1}{{\\left( {\\sum\\limits_{m = 1}^h {\\left( {\\frac{1}{{{{\\bar \\gamma }_{{i_{{j_m}}}}}}}} \\right) + \\sum\\limits_{l = 1}^m {\\left( {\\frac{1}{{{{\\bar \\gamma }_{{i_l}}}}}} \\right) - \\frac{m}{{{{\\bar \\gamma }_{{i_k}}}}} - \\frac{h}{{{{\\bar \\gamma }_{{i_q}}}}}} } } \\right)}}} \\right)} } } } \\nonumber\n\\\\\n\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!&&\\!\\!\\!\\!\\!\\!\\!\\quad \\quad \\quad \\times {\\int_0^\\infty {\\int_0^{\\left( {\\frac{{1 - T}}{T}} \\right){z_1}} {\\exp \\left( { - \\frac{{{z_1}}}{{{{\\bar \\gamma }_{{i_k}}}}}} \\right)\\exp \\left( { - \\frac{{{z_2}}}{{{{\\bar \\gamma }_{{i_q}}}}}} \\right)\\left[ {1 - U\\left( {\\frac{{{z_1}}}{m} - 
\\frac{{{z_2}}}{h}} \\right)} \\right]d{z_2}d{z_1}} } } \\Vast] \\nonumber\n\\\\\n\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!&&\\!\\!\\!\\!\\!\\!\\! - \\sum\\limits_{{i_m} = 1}^N {\\frac{1}{{{{\\bar \\gamma }_{{i_m}}}}}\\sum\\limits_{\\left\\{ {{i_1}, \\ldots ,{i_{m - 1}}} \\right\\} \\in {{\\mathop{\\rm P}\\nolimits} _{m - 1}}\\left( {{I_N} - \\left\\{ {{i_m}} \\right\\}} \\right)} {\\sum\\limits_{\\scriptstyle k = 1 \\atop \n \\scriptstyle \\left\\{ {{i_1}, \\ldots ,{i_{m - 1}}} \\right\\}}^{m - 1} {{C_{k,1,m-1}}\\sum\\limits_{\\left\\{ {{i_{m + 1}}, \\ldots ,{i_N}} \\right\\} \\in {{\\mathop{\\rm P}\\nolimits} _{N - m}}\\left( {{I_N} - \\left\\{ {{i_m}} \\right\\} - \\left\\{ {{i_1}, \\ldots ,{i_{m - 1}}} \\right\\}} \\right)} {\\sum\\limits_{\\scriptstyle q = m + 1 \\atop \n \\scriptstyle \\left\\{ {{i_{m + 1}}, \\ldots ,{i_N}} \\right\\}}^N {{C_{q,m+1,N}}} } } } } \\nonumber\n\\\\\n\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!&&\\!\\!\\!\\!\\!\\!\\!\\quad \\times \\Vast[{\\sum\\limits_{h = 1}^{N - m} {{{\\left( { - 1} \\right)}^h}\\sum\\limits_{{j_1} = {j_0} + m + 1}^{N - h + 1} { \\cdots \\sum\\limits_{{j_h} = {j_{h - 1}} + 1}^N {\\left( {\\frac{1}{{\\left( {\\sum\\limits_{m = 1}^h {\\left( {\\frac{1}{{{{\\bar \\gamma }_{{i_{{j_m}}}}}}}} \\right) + \\sum\\limits_{l = 1}^m {\\left( {\\frac{1}{{{{\\bar \\gamma }_{{i_l}}}}}} \\right) - \\frac{m}{{{{\\bar \\gamma }_{{i_k}}}}} - \\frac{h}{{{{\\bar \\gamma }_{{i_q}}}}}} } } \\right)}}} \\right)} } } } \\nonumber\n\\\\\n\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!&&\\!\\!\\!\\!\\!\\!\\!\\quad \\quad \\quad \\times {\\int_0^\\infty {\\int_0^{\\left( {\\frac{{1 - T}}{T}} \\right){z_1}} {\\exp \\left( { - \\frac{{{z_2}}}{{{{\\bar \\gamma }_{{i_q}}}}}} \\right)\\exp \\left( { - \\left( {\\sum\\limits_{m = 1}^h {\\left( {\\frac{1}{{{{\\bar \\gamma }_{{i_{{j_m}}}}}}}} \\right) + 
\\sum\\limits_{l = 1}^m {\\left( {\\frac{1}{{{{\\bar \\gamma }_{{i_l}}}}}} \\right) - \\frac{h}{{{{\\bar \\gamma }_{{i_q}}}}}} } } \\right)\\frac{{{z_1}}}{m}} \\right)\\left[ {1 - U\\left( {\\frac{{{z_1}}}{m} - \\frac{{{z_2}}}{h}} \\right)} \\right]d{z_2}d{z_1}} } } \\Vast].\n\\end{eqnarray}\n\\hrulefill\n\\end{figure*}\n\\clearpage\nThe closed-form expressions of the integral parts appearing in (\\ref{eq:Capture_probability_closed_form_2}) can be derived as follows:\n\\begin{enumerate}\n\\item[i)] The first integral part:\n\\begin{equation} \\small \\label{eq:Capture_probability_closed_form_int_1}\n\\int_0^\\infty {\\int_0^{\\left( {\\frac{{1 - T}}{T}} \\right){z_1}} {\\exp \\left( { - \\frac{{{z_2}}}{{{{\\bar \\gamma }_{{i_q}}}}}} \\right)\\exp \\left( { - \\frac{{{z_1}}}{{{{\\bar \\gamma }_{{i_k}}}}}} \\right)d{z_2}d{z_1}} } ={{\\bar \\gamma }_{{i_q}}}{{\\bar \\gamma }_{{i_k}}} - \\frac{{{{\\bar \\gamma }_{{i_q}}}}}{{\\left( {\\frac{1}{{T \\cdot {{\\bar \\gamma }_{{i_q}}}}} + \\frac{1}{{{{\\bar \\gamma }_{{i_k}}}}} - \\frac{1}{{{{\\bar \\gamma }_{{i_q}}}}}} \\right)}}.\n\\end{equation}\n\\item[ii)] The second integral part:\n\\small\\begin{eqnarray} \\label{eq:Capture_probability_closed_form_int_2}\n&&\\int_0^\\infty {\\int_0^{\\left( {\\frac{{1 - T}}{T}} \\right){z_1}} {\\exp \\left( { - \\frac{{{z_2}}}{{{{\\bar \\gamma }_{{i_q}}}}}} \\right)\\exp \\left( { - \\left( {\\sum\\limits_{l = 1}^m {\\left( {\\frac{1}{{{{\\bar \\gamma }_{{i_l}}}}}} \\right)} } \\right)\\frac{{{z_1}}}{m}} \\right)d{z_2}d{z_1}} } \\nonumber\n\\\\\n&&=\\frac{{{{\\bar \\gamma }_{{i_q}}}}}{{\\left( {\\sum\\limits_{l = 1}^m {\\left( {\\frac{1}{{m \\cdot {{\\bar \\gamma }_{{i_l}}}}}} \\right)} } \\right)}} - \\frac{{{{\\bar \\gamma }_{{i_q}}}}}{{\\left( {\\sum\\limits_{l = 1}^m {\\left( {\\frac{1}{{m \\cdot {{\\bar \\gamma }_{{i_l}}}}}} \\right) + \\frac{{1 - T}}{{T \\cdot {{\\bar \\gamma }_{{i_q}}}}}} } \\right)}}.\n\\end{eqnarray}\\normalsize\n\\item[iii)] The third integral 
part:\n\\small\\begin{eqnarray} \\label{eq:Capture_probability_closed_form_int_3}\n&&\\int_0^\\infty {\\int_0^{\\left( {\\frac{{1 - T}}{T}} \\right){z_1}} {\\exp \\left( { - \\frac{{{z_1}}}{{{{\\bar \\gamma }_{{i_k}}}}}} \\right)\\exp \\left( { - \\frac{{{z_2}}}{{{{\\bar \\gamma }_{{i_q}}}}}} \\right)U\\left( {\\frac{{{z_1}}}{m} - \\frac{{{z_2}}}{h}} \\right)d{z_2}d{z_1}} } \\nonumber\n\\\\\n&&={{\\bar \\gamma }_{{i_q}}}{{\\bar \\gamma }_{{i_k}}}U\\left( {\\frac{1}{m} - \\frac{{1 - T}}{{T \\cdot h}}} \\right) - \\frac{{{{\\bar \\gamma }_{{i_q}}}}}{{\\left( {\\frac{{1 - T}}{{{{\\bar \\gamma }_{{i_q}}}T}} + \\frac{1}{{{{\\bar \\gamma }_{{i_k}}}}}} \\right)}}U\\left( {\\frac{1}{m} - \\frac{{1 - T}}{{T \\cdot h}}} \\right) \\nonumber\n\\\\\n&&\\quad+ {{\\bar \\gamma }_{{i_q}}}{{\\bar \\gamma }_{{i_k}}}\\left[ {1 - U\\left( {\\frac{1}{m} - \\frac{{1 - T}}{{T \\cdot h}}} \\right)} \\right] - \\frac{{{{\\bar \\gamma }_{{i_q}}}}}{{\\left( {\\frac{h}{{{{\\bar \\gamma }_{{i_q}}}m}} + \\frac{1}{{{{\\bar \\gamma }_{{i_k}}}}}} \\right)}}\\left[ {1 - U\\left( {\\frac{1}{m} - \\frac{{1 - T}}{{T \\cdot h}}} \\right)} \\right].\n\\end{eqnarray} \\normalsize\n\\item[iv)] The fourth integral part:\n\\small\\begin{eqnarray} \\label{eq:Capture_probability_closed_form_int_4}\n\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!&&\\int_0^\\infty \\! {\\int_0^{\\left( \\!{\\frac{{1 - T}}{T}} \\!\\right){z_1}} \\!{\\exp \\left( \\!{ - \\frac{{{z_1}}}{{{{\\bar \\gamma }_{{i_k}}}}}} \\!\\right)\\!\\exp \\left( \\!{ - \\left( {\\sum\\limits_{m = 1}^h {\\left(\\! {\\frac{1}{{{{\\bar \\gamma }_{{i_{{j_m}}}}}}}}\\! \\right) + \\sum\\limits_{l = 1}^m \\!{\\left( \\!{\\frac{1}{{{{\\bar \\gamma }_{{i_l}}}}}} \\!\\right) - \\frac{m}{{{{\\bar \\gamma }_{{i_k}}}}}} } } \\!\\right)\\frac{{{z_2}}}{h}} \\!\\right)\\!U\\left(\\! 
{\\frac{{{z_1}}}{m} - \\frac{{{z_2}}}{h}} \\!\\right)d{z_2}d{z_1}} } \\nonumber\n\\\\\n\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!&&=\\frac{{{{\\bar \\gamma }_{{i_k}}}h}}{{\\left( {\\sum\\limits_{m = 1}^h {\\left( {\\frac{1}{{{{\\bar \\gamma }_{{i_{{j_m}}}}}}}} \\right) + \\sum\\limits_{l = 1}^m {\\left( {\\frac{1}{{{{\\bar \\gamma }_{{i_l}}}}}} \\right) - \\frac{m}{{{{\\bar \\gamma }_{{i_k}}}}}} } } \\right)}}U\\left( {\\frac{1}{m} - \\frac{{1 - T}}{{T \\cdot h}}} \\right)\\nonumber\n\\\\\n\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!&&\\quad- \\frac{h}{{\\left(\\! {\\sum\\limits_{m = 1}^h \\!{\\left( \\!{\\frac{1}{{{{\\bar \\gamma }_{{i_{{j_m}}}}}}}} \\!\\right) + \\sum\\limits_{l = 1}^m \\!{\\left(\\! {\\frac{1}{{{{\\bar \\gamma }_{{i_l}}}}}} \\!\\right) - \\frac{m}{{{{\\bar \\gamma }_{{i_k}}}}}} } } \\!\\right)\\!\\left\\{\\! {\\left(\\! 
{\\sum\\limits_{m = 1}^h \\!{\\left( \\!{\\frac{1}{{{{\\bar \\gamma }_{{i_{{j_m}}}}}}}} \\!\\right) + \\sum\\limits_{l = 1}^m \\!{\\left( \\!{\\frac{1}{{{{\\bar \\gamma }_{{i_l}}}}}} \\!\\right) - \\frac{m}{{{{\\bar \\gamma }_{{i_k}}}}}} } } \\!\\right)\\frac{{1 - T}}{{T \\cdot h}} + \\frac{1}{{{{\\bar \\gamma }_{{i_k}}}}}} \\!\\right\\}}}U\\left( \\!{\\frac{1}{m} - \\frac{{1 - T}}{{T \\cdot h}}} \\!\\right)\\nonumber\n\\\\\n\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!&& \\quad + \\frac{{{{\\bar \\gamma }_{{i_k}}}h}}{{\\left( {\\sum\\limits_{m = 1}^h {\\left( {\\frac{1}{{{{\\bar \\gamma }_{{i_{{j_m}}}}}}}} \\right) + \\sum\\limits_{l = 1}^m {\\left( {\\frac{1}{{{{\\bar \\gamma }_{{i_l}}}}}} \\right) - \\frac{m}{{{{\\bar \\gamma }_{{i_k}}}}}} } } \\right)}}\\left[ {1 - U\\left( {\\frac{1}{m} - \\frac{{1 - T}}{{T \\cdot h}}} \\right)} \\right]\\nonumber\n\\\\\n\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!&&\\quad- \\frac{h}{{\\left(\\! {\\sum\\limits_{m = 1}^h \\!{\\left(\\! {\\frac{1}{{{{\\bar \\gamma }_{{i_{{j_m}}}}}}}}\\! \\right) + \\sum\\limits_{l = 1}^m \\!{\\left( \\!{\\frac{1}{{{{\\bar \\gamma }_{{i_l}}}}}} \\!\\right) - \\frac{m}{{{{\\bar \\gamma }_{{i_k}}}}}} } }\\! \\right)\\!\\left\\{\\! {\\left(\\! {\\sum\\limits_{m = 1}^h \\!{\\left(\\! {\\frac{1}{{{{\\bar \\gamma }_{{i_{{j_m}}}}}}}} \\!\\right) + \\sum\\limits_{l = 1}^m \\!{\\left( \\!{\\frac{1}{{{{\\bar \\gamma }_{{i_l}}}}}}\\! \\right) - \\frac{m}{{{{\\bar \\gamma }_{{i_k}}}}}} } } \\!\\right)\\frac{1}{m} + \\frac{1}{{{{\\bar \\gamma }_{{i_k}}}}}} \\!\\right\\}}}\\left[\\! {1 - U\\left(\\! 
{\\frac{1}{m} - \\frac{{1 - T}}{{T \\cdot h}}} \\!\\right)} \\!\\right].\n\\end{eqnarray} \\normalsize\n\\item[v)] The fifth integral part:\n\\small\\begin{eqnarray} \\label{eq:Capture_probability_closed_form_int_5}\n&&\\int_0^\\infty {\\int_0^{\\left( {\\frac{{1 - T}}{T}} \\right){z_1}} {\\exp \\left( { - \\frac{{{z_1}}}{{{{\\bar \\gamma }_{{i_k}}}}}} \\right)\\exp \\left( { - \\frac{{{z_2}}}{{{{\\bar \\gamma }_{{i_q}}}}}} \\right)\\left[ {1 - U\\left( {\\frac{{{z_1}}}{m} - \\frac{{{z_2}}}{h}} \\right)} \\right]d{z_2}d{z_1}} } \\nonumber\n\\\\\n&&=\n\\frac{{{{\\bar \\gamma }_{{i_q}}}}}{{\\left( {\\frac{h}{{m \\cdot {{\\bar \\gamma }_{{i_q}}}}} + \\frac{1}{{{{\\bar \\gamma }_{{i_k}}}}}} \\right)}}U\\left( {\\frac{{1 - T}}{{T \\cdot h}} - \\frac{1}{m}} \\right) - \\frac{{{{\\bar \\gamma }_{{i_q}}}}}{{\\left( {\\frac{{1 - T}}{{T \\cdot {{\\bar \\gamma }_{{i_q}}}}} + \\frac{1}{{{{\\bar \\gamma }_{{i_k}}}}}} \\right)}}U\\left( {\\frac{{1 - T}}{{T \\cdot h}} - \\frac{1}{m}} \\right).\n\\end{eqnarray} \\normalsize\n\\item[vi)] The sixth integral part:\n\\small\\begin{eqnarray} \\label{eq:Capture_probability_closed_form_int_6}\n\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!&&\\int_0^\\infty {\\int_0^{\\left( {\\frac{{1 - T}}{T}} \\right){z_1}} {\\exp \\left( { - \\frac{{{z_2}}}{{{{\\bar \\gamma }_{{i_q}}}}}} \\right)\\exp \\left( { - \\left( {\\sum\\limits_{m = 1}^h {\\left( {\\frac{1}{{{{\\bar \\gamma }_{{i_{{j_m}}}}}}}} \\right) + \\sum\\limits_{l = 1}^m {\\left( {\\frac{1}{{{{\\bar \\gamma }_{{i_l}}}}}} \\right) - \\frac{h}{{{{\\bar \\gamma }_{{i_q}}}}}} } } \\right)\\frac{{{z_1}}}{m}} \\right)\\left[ {1 - U\\left( {\\frac{{{z_1}}}{m} - \\frac{{{z_2}}}{h}} \\right)} \\right]d{z_2}d{z_1}} } \\nonumber\n\\\\\n\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!&& =\\frac{{m \\cdot {{\\bar \\gamma }_{{i_q}}}}}{{\\left( {\\sum\\limits_{m = 1}^h {\\left( {\\frac{1}{{{{\\bar \\gamma }_{{i_{{j_m}}}}}}}} \\right) 
+ \\sum\\limits_{l = 1}^m {\\left( {\\frac{1}{{{{\\bar \\gamma }_{{i_l}}}}}} \\right)} } } \\right)}}U\\left( {\\frac{{1 - T}}{{T \\cdot h}} - \\frac{1}{m}} \\right) \\nonumber\n\\\\\n\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!&&\\quad- \\frac{{m \\cdot {{\\bar \\gamma }_{{i_q}}}}}{{\\left\\{ {\\left( {\\sum\\limits_{m = 1}^h {\\left( {\\frac{1}{{{{\\bar \\gamma }_{{i_{{j_m}}}}}}}} \\right) + \\sum\\limits_{l = 1}^m {\\left( {\\frac{1}{{{{\\bar \\gamma }_{{i_l}}}}}} \\right) - \\frac{h}{{{{\\bar \\gamma }_{{i_q}}}}}} } } \\right) + \\frac{{m\\left( {1 - T} \\right)}}{{T \\cdot {{\\bar \\gamma }_{{i_q}}}}}} \\right\\}}}U\\left( {\\frac{{1 - T}}{{T \\cdot h}} - \\frac{1}{m}} \\right).\n\\end{eqnarray} \\normalsize\n\\end{enumerate}\n\n\n\\subsection{Finger Replacement Schemes for RAKE Receivers in the Soft Handover Region over i.n.d. Fading Channels}\nRecently, new finger replacement techniques for RAKE reception in the soft handover (SHO) region have been proposed and analyzed over independent and identically distributed (i.i.d.) fading channels \\cite{kn:S_Choi_2008_1}. The proposed schemes are based on block comparison among groups of resolvable paths from different base stations and reduce complexity while offering commensurate performance. If we let $Y = \\sum\\limits_{i = 1}^{{L_c} - {L_s}} {{u_i}}$, ${W_1} = \\sum\\limits_{i = {L_c} - {L_s} + 1}^{{L_c}} {{u_i}}$, and ${W_n} = \\sum\\limits_{i = 1}^{{L_s}} {{v_i^n}}$ (for $n = 2, \\ldots ,N$), where $u_i$ ($i=1,2,\\ldots,L_1$) and $v_i^n$ ($i=1,2,\\ldots,L_n$, $2\\le n\\le N$) are the order statistics obtained by arranging the $L_n$ nonnegative i.n.d. path SNRs of the $n$th base station in descending order, then the RAKE combiner output SNR with GSC is given by $Y + \\max_n{W_n}$. Note that $Y$ and $W_1$ are dependent, whereas $Y$ and $W_n$ ($n\\ge2$) are independent. In practice, the i.i.d. 
fading assumption on the diversity paths is not always realistic since, for example, adjacent multipaths may travel different routes even when they experience the same path loss. Although non-identical fading is important for practical implementations, \\cite{kn:S_Choi_2008_1} investigated the non-uniform power delay profile case only through computer simulation because of the high analytical complexity. The major difficulty is deriving the joint statistics of the ordered exponential variates under non-identical fading assumptions, which can be obtained by applying Theorems~\\ref{th:case2_1} and \\ref{th:case2_3} of Section~V. Due to space limitations, the analytical details are omitted in this work.\n\n\\subsection{Outage Probability of GSC RAKE Receivers over i.n.d. Rayleigh Fading Channels Subject to Self-Interference}\nRecently, the outage probability of GSC RAKE receivers subject to self-interference over independent and identically distributed Rayleigh fading channels was investigated in~\\cite{kn:capture_outage_GSC}. Let $\\gamma_{i}$ be the SNR of the $i$-th diversity path and $u_i$ ($i=1,2,\\ldots,N$) be the order statistics obtained by arranging the $N$ $(N\\ge2)$ nonnegative i.n.d. RVs $\\left\\{ {\\gamma_{i} } \\right\\}_{i = 1}^N$ in decreasing order of magnitude such that $u_1 \\ge u_2 \\ge \\cdots \\ge u_N$. The outage probability, denoted by ${\\rm{P_{Out}}}$, is then defined as~\\cite{kn:capture_outage_GSC}\n\\begin{equation} \\label{eq:Prob_Outage} \\small\n{\\rm{P_{Out}}}={\\rm{Pr}}\\left[ \\frac{\\sum\\limits_{n = 1}^{m}{{u_{n}}}}{1+\\alpha\\sum\\limits_{n = m+1}^N{{u_{n}}}}